Some tricks to use when dealing with databases

Warning:

  • web2py’s only knowledge about table structure comes from web2py itself,
    to be more specific, from the files created in the databases folder inside web2py’s main directory.
    If you change the tables outside web2py, web2py gets confused, and you have to set migrate=False.
  • If your database connection is not established even though you’ve specified the right connection string, make sure you’ve specified the right port: your database may be running on a port other than the default one.
  • To use a field of type reference, you have to reference a field in a table that exists in the same database; you can’t reference a field in a table in another database. Use a field of type ‘integer’ instead and be careful with your queries.
  • It is actually easy to add support for a new DB. Everything is in one file, gluon/sql.py:
    • find a Python module that interfaces to MaxDB
    • identify the connection strings
    • add the translation strings (at the top of the gluon/sql.py file)
    • run the tests (python gluon/sql.py)
  • Database select syntax:
    db(query).select(*fields,orderby=,groupby=,limitby=)
    

    So you can make something like:

     dbref=db 
     qry=db.address.customer==2 
     fields=['number','city','state'] 
     sel=[db.address[field] for field in fields] 
     rows=dbref(qry).select(*sel) # note the * 
    
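    As a quick illustration of those keyword arguments, here is a small sketch reusing the address table from the snippet above (the values are made up):

     rows = db(db.address.customer==2).select(
                db.address.city, db.address.state,
                orderby=db.address.city,    # sort alphabetically by city
                limitby=(0, 10))            # return at most the first 10 rows

    groupby is normally combined with an aggregate expression, as in the sum() examples further down.
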
  • Note that db((q1) & (q2)).select() is equivalent to db(q1)(q2).select(),
    so:

     row=db((db.people.name==request.args(0)) &
            (db.people.company_id==db.company.id)).select().first()
     people_name=row.people.name
     company_name=row.company.name
    

    Notice that you can also define:

         people_and_their_companies=db(db.people.company_id==db.company.id) 
         row=people_and_their_companies(db.people.name==request.args(0)).select().first() 
    

    and wow!!! congratulations, you’ve just made an automatic inner join;
    then you can do:

         people_name=row.people.name 
         company_name=row.company.name 
    
  • How to define a table with a large number of fields easily?

    Do something like this:

    fields=[Field('addr%02i' %i,'string') for i in range(21)]
    db.define_table('name',*fields)
    
  • If multiple applications access the same database, then for every table
    only one app should have migrate=True, so you know who is altering tables. It is not necessary that all apps have the same define_table, as long as each define_table defines a subset of the existing fields.
    This is not a problem as long as migrate=False.
    If you want one app to change a table definition and all apps to
    reflect that change, yes, you have to change them all.
    This is because changing the table definition may break an app.
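    As an illustration of that setup (the 'invoice' table and its fields are placeholders I made up):

     # model of the one application allowed to alter the table
     db.define_table('invoice',
                     Field('number', 'integer'),
                     Field('amount', 'double'),
                     Field('notes', 'text'),
                     migrate=True)

     # model of every other application: same table name, possibly a
     # subset of the fields, but never migrating
     db.define_table('invoice',
                     Field('number', 'integer'),
                     Field('amount', 'double'),
                     migrate=False)
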

  • Difference between requires, required, and notnull

  • requires
    

    enforces validation at the form level.
    Most of us are well experienced with it, so there is no need to say more about it.

  • notnull=True

    Is enforced at the database level.

    # model
    db.define_table('my_table', Field('soso', 'string', notnull=True))
    #controller
    def index():
        form = SQLFORM(db.my_table)
        if form.accepts(request.vars, session, dbio=False):
            db.my_table.insert(soso=None)
            response.flash = T('Accepted')
        elif form.errors:
            response.flash = T('Not accepted')
        else:
            pass
        rows = db(db.my_table.id>0).select()
        return dict(form=form, rows=rows)
    
    

    In this example, I used dbio=False with the accepts() function to stop the automatic database manipulation.
    Now I have full control to do what I want to do.
    Trying to do:

    db.my_table.insert(soso=None)
    

    while using notnull=True in the table definition in my model will cause the database to refuse the insertion, and you’ll get the error flash message
    ‘Not accepted’ as well as an error message ‘enter a value’.
    This shows you that this validation is enforced at the database level.

    In fact if you changed the code to :

    ......
    if form.accepts(request.vars, session):
         response.flash = ...
    elif form.errors:
         response.flash = ...
    else:
        pass
    
    

    You’ll get the same error when trying to submit the form without any value in it.
    In the former example, using dbio=False, the insert fails even if you submit a value in the form, since we are always trying to insert Null into the database field using insert() manually.
    I just wanted to show you that this validation is enforced on insert() and at the database level, not at the form level.

  • required=True
    

    Is enforced at the level of db.table.insert(): it tells web2py that the field has to be specified on insert, although the value itself may be “” or Null.
    So the database will accept Null values and empty strings; it just refuses an insert that omits the field entirely.

    #model
    db.define_table('my_', Field('soso', required=True), Field('toto'), Field('lolo'))
    #controller
    def index():
        form = SQLFORM(db.my_)
        if form.accepts(request.vars, session, dbio=False):
            db.my_.insert(soso='')  # ok
            db.my_.insert(soso=None, lolo='ss', toto='')  # ok
            db.my_.insert(lolo='ss', toto='')  # Not OK: 'required field not specified'
            response.flash = T('Accepted')
        elif form.errors:
            response.flash = T('Not accepted')
        else:
            pass
        rows = db(db.my_.id>0).select()
        return dict(form=form, s=rows)
    

    In the previous example, you see that a required field has to be specified whenever you use an insert statement.


  • Using db.executesql()

    Sometimes you just need to execute SQL yourself, and db.executesql happens to be handy in those situations.

    rawrows=db.executesql(yoursql)  # returns a list of tuples, one per row, e.g. [(col1_value, col2_value, col3_value), ...]
    

    Notice that the rows are not parsed, therefore you cannot say
    rawrows[i].colname; you have to say
    rawrows[i][j], where j is the column number as returned by the query.
    This may seem confusing in the beginning, but in fact if you run something like:

    r=db.executesql('select cola,colb from mytable;')
    

    r[0][0] is col a and r[0][1] is col b. It does not matter in which
    order they appear in the table definition [all that matters is the order in which you listed them in the query].

    ex:

    #model
    db.define_table('my_table', Field('soso'), Field('toto'), Field('lolo'))
    
    # supposing you've entered data [soso='a', toto='b', lolo='c'] in the table fields using the appadmin interface
    
    #controller
    def index():    
        r = db.executesql('select soso, lolo from my_table')
        print r  # [(u'a', u'c')]
        print r[0][0], r[0][1] #a c
        return dict()
    

    One thing to note when using executesql is that you need to escape strings yourself so that you’re not vulnerable to SQL injection attacks.
    When you’re using DAL functions, though, you don’t need to do this since it’s automatic.
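    For example, here is a sketch of the difference (the 'person' table is invented for the example; the point is never to interpolate raw user input into the SQL string):

     # UNSAFE: raw user input goes straight into the SQL string
     # rows = db.executesql("SELECT * FROM person WHERE name='%s';" % request.vars.name)

     # safer: let the DAL build (and escape) the condition, then reuse its SQL
     query = db.person.name == request.vars.name
     rows = db.executesql('SELECT * FROM person WHERE %s;' % str(query))

     # or simply stay inside the DAL, where escaping is automatic
     rows = db(db.person.name == request.vars.name).select()
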


    You can do things like:

    orderby=db.table.field.upper()
    #or
    orderby=db.table.datefield.month()
    

    Can I have a random orderby directly using web2py DAL?
    yes, use :

    orderby='<random>'
    

    Interesting right ? 😀
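    For instance, to pull a single random record (a sketch, assuming some table db.mytable exists):

     random_row = db().select(db.mytable.ALL,
                              orderby='<random>',
                              limitby=(0, 1)).first()
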


    An example showing how indexes can speed things up

    Q:
    I defined a model with two tables, let web2py create them (tried this
    with both sqlite and mysql), and then imported data (about 1,400 rows
    and 140,000 rows, respectively) into the tables using external
    scripts. After doing so I found my app became very slow – it took
    10-15 seconds to respond. Then I tried the csv import function in
    web2py, which worked, and there is no slowness at all. Why does this
    happen?

    A:
    The one thing that can make it slow is the absence of indices. You
    need to create indices on fields you use for searching. If you use
    sqlite you can do, after define_table:

    db.executesql('CREATE INDEX IF NOT EXISTS myindex ON mytable (colname);')
    

    For postgres, mysql, and oracle you are better off creating the index
    outside web2py.


    What else, other than not creating indexes when I have a big number of records, could affect the performance of my database?

    If you’re making joins using native SQL syntax, maybe via executesql(), then the DB may be performing your joins
    in the order you’ve specified. This is the SQL standard, and in that case there will not be any optimization. Therefore, whenever you think about writing joins in traditional SQL, make sure that you do the most discriminating joins first (the ones returning the smallest set of results).

    Using the DAL: the query planner in the DB may be optimizing the query for you, and the order of joins isn’t so important.


    What would be the simplest way to create an entry form for a new record, pre-filled with the fields from another record in the same
    table (knowing that record’s id) ?

    Assuming….
    db.define_table(‘mytable’,…)

    and the id of the source record is source_id, you can do:

    # get the record 
    row=db(db.mytable.id==source_id).select().first()
    # set the default for all fields by ID 
    for fieldname in db.mytable.fields: 
          if fieldname!='id': db.mytable[fieldname].default=row[fieldname] 
    form=SQLFORM(db.mytable) 
    

    Can you give me some examples of using joins in the DAL?

  • Ex:1
    Given :

    db.define_table('children',   Field('user_name' )) 
    db.define_table('child_profiles', Field('child_id',  db.children), Field('first_name' ), ...) 
    db.define_table('buddies', Field('buddy_id' , db.children),Field('child_id' , db.children), ..) 
    

    You can do :

     
    child_id = ...  # maybe request.args(0), or anything else according to the logic you're using in your application
    rows = db(db.buddies.child_id==child_id) \
              (db.buddies.buddy_id==db.child_profiles.child_id).select(db.buddies.ALL)

    # db(cond1)(cond2) is the same as db((cond1)&(cond2))

    rows = db(db.buddies.child_id==child_id) \
              (db.buddies.buddy_id==db.children.id) \
              (db.buddies.buddy_id==db.child_profiles.child_id).select()

    for row in rows:
        print row.children.user_name, row.child_profiles.first_name
     
    
  • Ex:2

    Q:
    I have 2 tables, “owner” and “dog”. The “dog” table has a many-to-one relation to owners. If I have an owner id, how can I make a join based on the owner id?
    I did this:

    present_owner = db((db.owner.id==session.user_id) &
    ( db.dog.owner_id==session.user_id)).select()
    

    But when I wanted to retrieve data, for example
    present_owner.user_name, it raised a traceback.
    What is the proper way to retrieve the data from the join?
    Thanks

    Ans:

    * For your query to look more like a join, you may rewrite it as :

     rows = db((db.owner.id==db.dog.owner_id) &
               (db.owner.id==session.user_id)).select()
    

    rows now contains:

    owner = rows[0].owner  # or rows.first().owner -- an owner
    dog = rows[0].dog      # or rows.first().dog -- a dog that belongs to that owner
    # then you can get more info using
    owner.name, dog.name, .......
    
    
  • Ex:3
    Q: How to make 2 inner joins using the following tables?

    db.define_table('table1', Field('name'))
    db.define_table('table3', Field('name'))
    db.define_table('table2', Field('table1_id', db.table1), Field('table3_id', db.table3))
    

    Ans :

    rows = db((db.table2.table1_id==db.table1.id) &
              (db.table2.table3_id==db.table3.id)).select()
    
    for row in rows:
        print row.table2.id, row.table1.name, row.table3.name
    
  • Q: Does DAL support outer joins?

    Ans:
    Yes, web2py does left outer joins, BUT they require an explicit “left” keyword, otherwise you’re making an inner join.
    Example:

     db().select(db.table1.ALL,db.table2.ALL,left=db.table2.on(....==.....))
    

    There is no need for right joins, since you can get the same functionality from a left join by swapping the tables (see the sketch below).
    You can check the manual for discussions and examples about that.
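    A small sketch of what “swapping the tables” means, reusing table1 and table2 from Ex:3 above: a “right join” of table1 onto table2 is just a left join written the other way around.

     # every row of table2, plus the matching table1 row when there is one
     rows = db().select(db.table2.ALL, db.table1.ALL,
                        left=db.table1.on(db.table1.id==db.table2.table1_id))
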

    In fact, you could do something like:

    db.define_table('action_queue', Field('user_id','integer')) 
    db.define_table('unprocessed', Field('action_queue_id','integer')) 
    db(db.action_queue.id==db.unprocessed.action_queue_id).select(left=db.unprocessed) 
    

    BUT: this is an old notation which is supported only for backward compatibility and works on sqlite. It should be:

    db().select(db.action_queue.ALL, db.unprocessed.ALL,
                left=db.unprocessed.on(db.action_queue.id==db.unprocessed.action_queue_id))
    

  • Q: How to use sub-queries?
    Ans:

    q = db()._select(...)
    print q
    

    Using db()._select() you can get the real SQL query used by the DAL to perform some sort of selection.
    Using the same concept suppose you’ve :-

    db.define_table('a', Field('f1'))
    db.define_table('b', Field('f1'), Field('f2'))
    
    # a includes the following records:
    a.id   a.f1
    3       1
    4       2
    5       3
    6       4
    
    # b includes the following records:
    b.id	b.f1	b.f2
    3       1        y
    4       4        x
    

    Now suppose you want to perform the following query :

    SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a);
    


    So what do you do?

    You have 2 options:

    • using :
      db.executesql('SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a);')
      
    • Generate the sub-query using db()._select(), then use it in another query using DAL syntax :
      def index():   
         q = db()._select(db.a.f1)
         rows = db(db.b.f1.belongs(q)).select(db.b.f2)
         print db._lastsql
         return dict(rows=rows)
      

    great !!! and db._lastsql will print exactly the same query you wanted:

    SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a);
    

    and the answer is:

    b.f2
    y
    x
    
  • belongs can accept a set or a “select…” string, and _select(), as you may have noticed, creates such a query string.
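    For completeness, a tiny sketch of belongs with a plain Python list, reusing table b from above:

     # explicit set of values instead of a sub-query
     rows = db(db.b.f1.belongs([1, 2, 4])).select(db.b.f2)
     # equivalent to: SELECT b.f2 FROM b WHERE b.f1 IN (1, 2, 4);
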



    Q:

    I’ve got three tables: “companies”, “region”, and “items”.
    In “companies” there is a “region_id” column. In “items” there are “companies_id” and “price” columns. Is there a simple way to select the top 10 companies which belong to a particular region and whose item prices sum to the highest amount?

    A:

    First, check the “Grouping and counting” section in this chapter of the manual.

    Try this:
    
    rows = db(db.companies.region_id==region_id) \
           (db.items.company_id==db.companies.id).select(
               db.companies.ALL, 'sum(items.price)',
               groupby=db.items.company_id,
               orderby='sum(items.price) desc',
               limitby=(0,10))
    
    for row in rows:
        print row.companies.name, row._extra['sum(items.price)']
    

    A better, more recent way to do it [the previous one will still work, and is kept for backward compatibility]:

    You may do:

    summation = db.items.price.sum()
    ..... .select(db.companies.ALL, summation, groupby=.....
    for row in rows:
        print row.companies.name, row._extra[summation]
    

    The first method is always there to help if the function you’re trying to use is not (yet) implemented in web2py.

    Note:
    in fact doing something like:

    str(db.table.field.max())  # assuming field is numeric
    

    will print:

    'MAX(table.field)'
    

    Wooooooooooooooow, so they’re the same

    So, to understand this, I’ll summarize:

    some backend SQL DBMSes support certain functions and others don’t.
    If a function is supported by all the backend DBMSes that web2py supports, the function is implemented simply as:

    db.table.field.function()
    

    If not, you can still use it [assuming you know what you’re doing and the DBMS you’ll always use supports it]
    by writing a query string directly that specifies your query.

    The following example should make things clearer:
    Q:

    I’m using web2py with PostgreSQL. I can use ‘like’ and ‘belongs’ to construct simple SQL queries, but I didn’t find anything equivalent to the regular expression matching operators ~ or ~*. Did I miss something here? Or do I have to use Python to do regex matching on results returned from SQL?

    A:

    The fact is that postgresql supports it but many backends do not, therefore there is no API for it. You have two choices:

    • write the query in SQL:

      query="table.field SIMILAR TO '(a|b)'" 
      rows=db(query).select(db.table.ALL) 
      
    • do the pattern matching in Python on the returned values:

          import re
          r = re.compile('(a|b)')
          rows = db().select(db.table.ALL)
          rows.response = [x for i,x in enumerate(rows.response)
                           if r.match(rows[i].field)]
      
  • Let’s take another example to make sure you got all this stuff, OK? And you’re welcome.

    Q:
    Assuming I have:

    # Model:
    db.define_table('members', Field('name'))
    db.define_table('resources',
                    Field('resource_tier', 'integer'),
                    Field('resource_amount', 'integer'))
    db.define_table('deposits',
                    Field('depositer_id', 'reference members',
                          requires=IS_IN_DB(db, db.members.id, '%(name)s')),
                    Field('resource_id', 'reference resources',
                          requires=IS_IN_DB(db, db.resources.id)))
    
    

    How can I translate this kind of SQL statement:

    SELECT members.name, sum(resources.resource_amount) FROM resources, deposits, members WHERE ((members.id=deposits.depositer_id AND deposits.resource_id=resources.id) AND resources.resource_tier=0) GROUP BY deposits.depositer_id;
    

    A:

    summation = db.resources.resource_amount.sum()
    rows = db((db.members.id==db.deposits.depositer_id) &
              (db.deposits.resource_id==db.resources.id) &
              (db.resources.resource_tier==0)).select(db.members.name, summation,
                                                      groupby=db.deposits.depositer_id)
    for row in rows:
        print row.members.name, row._extra[summation]
    
    

    Now time for another question on the same example:

    Q:
    How could I translate this to web2py syntax ?

    SELECT members.name, SUM(resource_amount*(1-ABS(resource_tier))), SUM(resource_amount*(1-ABS(resource_tier-1))), SUM(resource_amount*(1-ABS(resource_tier-2))) FROM members LEFT JOIN deposits,resources WHERE (deposits.depositer_id=members.id AND deposits.resource_id=resources.id) GROUP BY members.name;
    
    

    A:

    member_depositer = (db.deposits.depositer_id==db.members.id)
    deposited_resource = (db.deposits.resource_id==db.resources.id)
    sqlrows = db(member_depositer & deposited_resource).select(
                  db.members.name,
                  'SUM(resource_amount*(1-ABS(resource_tier)))',
                  'SUM(resource_amount*(1-ABS(resource_tier-1)))',
                  'SUM(resource_amount*(1-ABS(resource_tier-2)))',
                  groupby=db.members.name,
                  left=(db.deposits, db.resources))
    print sqlrows
    

    Q:
    Having 2 fields of type datetime, how can I order records according to the difference between them?

    A:

    rows = db((db.ck.user_id==session.userid) &
              (db.ck.c_time>d1) & (db.ck.c_time<d2)).select(
                  db.ck.ALL, db.c.ALL,
                  left=db.ck.on(db.ck.id==db.c.ck_id),
                  orderby='DATEDIFF(c.c_time, ck.c_time)')
    
    for row in rows:
        print (row.c.c_time - row.ck.c_time).days
    

    DATEDIFF() must be supported by your DBMS for this to work.
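    If your DBMS has no DATEDIFF(), one portable (if less efficient) fallback is to do the ordering in Python on the joined rows; a sketch using the same table names, assuming every ck row has a matching c row:

     rows = db((db.ck.user_id==session.userid) &
               (db.ck.c_time>d1) & (db.ck.c_time<d2) &
               (db.ck.id==db.c.ck_id)).select(db.ck.ALL, db.c.ALL)
     # sort the joined rows by the datetime difference in Python
     for row in sorted(rows, key=lambda r: r.c.c_time - r.ck.c_time):
         print (row.c.c_time - row.ck.c_time).days
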

    Q:
    assuming:

    db.define_table('flowers', 
       Field('name'), 
       Field('category'), 
       Field('price', 'float'), 
       Field('quantity', 'integer')) 
    

    I want to create a column called TotalPrice that multiplies price and quantity.

    A:

    TotalPrice = 'price*quantity'
    rows = db().select(db.flowers.ALL, TotalPrice, orderby=TotalPrice)
    for row in rows:
        print row.flowers.name, row._extra[TotalPrice]
    

    What should you do to implement your own function?
    Ah ….. looking at sql.py, I found that very easy ….. that’s why I like web2py 😀

    This example should make things clear

    db.define_table('koko', Field('toto', 'integer'))
    db.koko.insert(toto=5)
    db.koko.insert(toto=10)
    
    from gluon.sql import Expression
    ex = Expression('toto*5', type='integer')
    
    def mulby5():
        return ex
    
    db.koko.toto.mulby5 = mulby5
    rows = db(db.koko.id>0).select(db.koko.ALL, db.koko.toto.mulby5())
    for row in rows:
        print row.koko.id, row.koko.toto, row._extra[db.koko.toto.mulby5()]
    # output:
    # 1 5 25
    # 2 10 50
    

    You can get things done without using ‘Expression’; just make the function return the string directly.

    One thing to note, though: because you’ve added an extra payload to the Rows object resulting from the query, when you want to select a field from it you should specify the table name explicitly.


    Update record/records
    You can use :

    db(condition).update(my_field=value)
    # db(condition) is a Set object; you'll end up setting the field 'my_field'
    # to the specified 'value' in all records of this set
    

    You can update only one row [one selected row] using the update_record function.

    rows =  db(...).select()
    for row in rows:
        row.update_record(my_field=value)
    # or
    db(...).select().first().update_record(my_field=value)
    

    So remember:

    Set.update()
    row.update_record()
    

    Ex:-

    # model
    db.define_table('node', Field('article_id'), Field('rgt', 'integer'))
    db.define_table('comment', Field('article_id', requires=IS_IN_DB(db, db.node.article_id)), Field('lft', 'integer'))
    
    # data inserted in db tables
    
    node.id       node.article_id        node.rgt
    10                   1                   1
    11                   2                   2
    12                   3                   3
    
    comment.id	   comment.article_id      comment.lft
    8                     1                      1
    9                     2                      2
    10                    3                      4
    
    # code in controller:
    for row in db((db.comment.article_id==db.node.article_id)&
                     (db.comment.lft>=db.node.rgt)).select(db.comment.ALL):
        row.update_record(lft=6)
    
    # result
    comment.id	   comment.article_id       comment.lft
    8                       1                     1
    9                       2                     2
    10                      3                     6
    
    

    In fact, as some of you may not have noticed, a condition like db(db.tableA.field==db.tableB.field) makes an automatic join, so if you do something like:

    rows = db((db.comment.article_id==db.node.article_id)&
                     (db.comment.lft>=db.node.rgt)).select()
    

    you won’t be able to do something like:

    for row in rows:
       row.update_record(..=...)
    

    Why? Because in this case you have an automatic join, and each row ends up looking like this:

    <Row {'comment': <Row {'update_record': <function <lambda> at 0x927fc6c>, 'lft': 1, 'article_id': '1', 'id': 8, 'delete_record': <function <lambda> at 0x927f764>}>, 'node': <Row {'update_record': <function <lambda> at 0x927fc34>, 'rgt': 1, 'article_id': '1', 'id': 10, 'delete_record': <function <lambda> at 0x927ff7c>}>}>
    
    

    So in order to use update_record you need to specify explicitly which table's row to call it on, and you end up with something like the following code to make things work:

    for row in db((db.comment.article_id==db.node.article_id)&
                     (db.comment.lft>=db.node.rgt)).select():
            row.comment.update_record(lft=row.node.rgt)
    

    Of course you don’t need this if you just select records out of one table,
    so you can safely do the following:

     row = db(db.person.id > 2).select().first()
     row.update_record(name='Curt')
    

    In fact you can use another trick in order to update the rows, which is:

    db.executesql('update comment, node set lft=lft+2 where %s;' %str(db.comment.article_id==db.node.article_id))
    

    which mixes SQL with web2py-SQL but is safe from SQL injection.
    Warning:
    Using the previous query with SQLite, I got this error:

    OperationalError: near ",": syntax error
    

    If somebody knows what’s wrong and how it can be fixed, please let me know.
    My guess is that it can be done another way, but I’m not an SQL guru. (The multi-table “UPDATE comment, node …” form is a MySQL extension; SQLite does not accept it, which is most likely why it fails there.)

    To summarize:
    update_record is very useful in situations where you want to use the old value of a field in a row to make some calculation and then update it.

    rows=db(db.page.id==1).select() 
    rows.first().update_record(counter = rows.first().counter + 1) 
    
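    If all you need is the increment itself, I believe the DAL also accepts a field expression inside update(), so the arithmetic happens in a single UPDATE statement (a sketch; check that your DAL version supports it):

     # UPDATE page SET counter=page.counter+1 WHERE page.id=1;
     db(db.page.id==1).update(counter=db.page.counter+1)
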

    or you can make it like:

    query=db.table.field == 'whatever' 
    db.executesql('UPDATE ..... WHERE %s' % str(query)) 
    

    Another cool thing that I want to talk about is the flexibility of the DAL when dealing with queries.

    In fact:

    db.table.field == value
    

    produces a Query object …. interesting, right?!!!! 😀
    Yes, a Query object that can be ANDed or ORed,
    so you can easily mix many queries together.
    ex:

      query = db.table.company_id>5
      query2 = auth.accessible_query('update',  db.company, auth.user.id)
      query3 = auth.accessible_query('owner', db.company, auth.user.id)
      query4 = (query2 | query3)
      result = db(query&query4).select()
    

    auth.accessible_query -> returns records with specific permissions.
    As you can see, I get the records in db.company on which the currently logged-in user has either the ‘update’ or the ‘owner’ permission, by ORing two queries together and then ANDing the result with another query.
    This is beautiful ….. right?


    You can even do something like:

    # model
    db.define_table('node', Field('article_id'), Field('rgt', 'integer'))
    db.define_table('comment', Field('article_id', requires=IS_IN_DB(db, db.node.article_id)), Field('lft', 'integer'), Field('order', 'string'))
    
    # data :
    comment.id comment.article_id comment.lft comment.order
    1               1                     1        None
    2               2                     2        None
    3               3                     4        None
    4               1                     1        a
    5               2                     2        d
    6               3                     2        b
    7               2                     1        Z
    8               3                     1	 -
    9               2                     2        -
    
    # controller
    def index():
        myorder=db.comment.order.upper()|db.comment.id
        print db().select(db.comment.ALL,orderby=myorder)
    
    # result
    comment.id,comment.article_id,comment.lft,comment.order
    1,           1,                  1,           <NULL>
    2,           2,                  2,           <NULL>
    3,           3,                  4,           <NULL>
    8,           3,                  1,
    9,           2,                  2,
    4,           1,                  1,             a
    6,           3,                  2,             b
    5,           2,                  2,             d
    7,           2,                  1,             Z
    

    BTW: groupby can be dealt with in the same manner.
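    A tiny sketch of that, grouping the same comment table on two fields (I am assuming count() is available on a field, like sum() and max() above):

     counter = db.comment.id.count()
     rows = db().select(db.comment.article_id, db.comment.lft, counter,
                        groupby=db.comment.article_id|db.comment.lft)
     for row in rows:
         print row.comment.article_id, row.comment.lft, row._extra[counter]
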



    Another neat example of using sub-queries and query sets:

    Q:
    Suppose you have a list of strings representing search terms, and you want to search the database for strings containing them (‘LIKE’ them).
    The list is variable, though, meaning that it may hold other search terms in the future. What can I do then?

    search_terms = ['..', '..', '..', ...]
    for i,st in enumerate(search_terms):
           subquery=db.company.name.lower().like('%'+st.lower()+'%')
           query=query|subquery if i else subquery
    companies = db(db.company.id.belongs(
        db(query)._select(db.company.id))).select()
    

    See the query = query|subquery if i else subquery 😀 interesting, right?!!
    For Python newbies … this has the effect of choosing
    query = subquery in the first iteration (since i = 0);
    after that, query and subquery are ORed each time, resulting in a query set that is then ORed with the next subquery. Cool, right?!!
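    If you prefer, the same loop can be written more compactly with reduce() (a sketch; search_terms is assumed non-empty):

     subqueries = [db.company.name.lower().like('%'+st.lower()+'%')
                   for st in search_terms]
     query = reduce(lambda a, b: a | b, subqueries)
     companies = db(db.company.id.belongs(
         db(query)._select(db.company.id))).select()
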


    Some other interesting stuff
    The following code is interesting, but I don’t know how to make use of it. If somebody has an idea, please let me know.

    s=db()
    s.sql_w
    s=s(db.users.id>1)
    s.sql_w
    'users.id>1'
    s=s(db.users.id<=4)
    s.sql_w
    '(users.id>1 AND users.id<=4)'
    


    In fact, you can play with all of this in an in-memory database:

    db = DAL('sqlite:memory:')  # interesting right ?!!!
    db.define_table('x', Field('y'))
    db.x.insert(y=9)
    db.x.insert(y=10)
    db.commit()
     str(db(db.x.id>0).select()) 
     'x.id,x.y\r\n1,9\r\n2,10\r\n'
    

    Oh, by the way, db((..) & (..)) is equivalent to db(…)(…), which is what is happening in the code above: s = s(...) then s = s(...), i.e. s(...)(...).

    str(db().select()) gives you the result as comma-separated values. Interesting, right?!!
    It’s useful, by the way, when you’re testing in the shell, where there is nothing rendering the results for you, just objects 😀

    db(db.table.field … value) is equivalent to db(‘table.field … value’). More interesting, right?

    This is very interesting indeed, in the sense that it enables you to make complicated queries:
    Ex:
    instead of making something like:

    sql_string = ("SELECT * FROM message WHERE "
                  "distance_sphere(GeometryFromText('POINT(' || longitude || ' ' || "
                  "latitude || ')', 2), "
                  "GeometryFromText('POINT(-122.415686 37.799724)', 2)) > 105;")
    records = db.executesql(sql_string)
    return dict(records=records)
    

    you can do :

    cond="distance_sphere(GeometryFromText(\'POINT(\' || longitude || \'   
    \' ||latitude || \')\', 2), GeometryFromText(\'POINT 
    (-122.41568637.799724)\', 2)) > 105" 
    records=db(cond).select(db.message.ALL) 
    

    Oh, by the way, the strings have their quotes escaped to prevent SQL injection attacks; this is required if you’re going to use executesql() directly.
    I found this very cool. What about you?!!! 😀

    Now I think it’s time to generalize this for a deeper understanding.
    In the web2py folder you can get an application shell using:

    python web2py.py -S -M
    

    Now you can do:

    db.define_table('test', Field('x'))
    db.test.insert(x='hey "hamdy", how are ya')
    # in the shell don't forget to commit so that the effects are reflected;
    # commits are automatic from within web2py, but from within the shell they have to be explicit
    db.commit() 
    query = db(db.test.x == 'hey "hamdy", how are ya')
    query.sql_w
    # the result:
    'test.x=\'hey "hamdy", how are ya\''
    # quotes are escaped, right?
    

    Now, what if the query string is complicated enough that it’s a bare SQL query: how can I use it safely, without needing executesql(), which is not safe most of the time unless you take care and escape your strings properly?

    In my case, I want to select the value of field x from table test, so I can write my query string like:

    cond = """x = 'hey "hamdy", how are ya'"""
    or
    cond = """test.x = 'hey "hamdy", how are ya'"""
    

    Now when I want to select from the database I can easily do:

    str(db(cond).select(db.test.ALL))
    # result
    'test.id,test.x\r\n1,"hey ""hamdy"", how are ya"\r\n'
    

    Note that I put the query string inside db(), then chose the table to select from in the select(),
    and it worked.
    You don’t have to specify the table inside select() [this is optional in this case] if you provided it in the string.
    This is not going to work, though:

    cond = """db.test.x = 'hey "hamdy", how are ya'"""
     str(db(cond).select(db.test.ALL))
    

    since in the case of a query string you shouldn’t say:

    'db.table.field ....'
    

    We’re not using pure DAL syntax here;
    instead say:

    'table.field. ....'
    or
    'field. ...'
    

    You may instead do:

    cond = (db.test.x == 'hey "hamdy", how are ya')
    # cond is a <gluon.sql.Query object at 0x257bcd0>
    
    str(db(cond).select())
    'test.id,test.x\r\n1,"hey ""hamdy"", how are ya"\r\n'
    
    Confusing?!!!  😀
    It's so simple; as a rule of thumb you can either do:
    db.table.field == value
    db.table.field > value
    or
    'table.field=value'
    'field=value'
    # a string can't have == inside it, but it can contain any other conditional operator
    

    Following the same trick as an earlier post which explains how to check whether the cache is functioning or not, I’ll explain another interesting thing that I like very much.
    All of us know that select() statements can be cached using something like:

    db(...).select(..cache=(cache.ram,3600)) #3600 is the expiration time
    

    Now imagine this scenario with me: you have a set of records that are cached in some controller action [say the index of your web application];
    if your web application is an online store, those records might represent recently added products.
    Now suppose you have another controller function that adds new products to your online store.
    What’s the problem here?
    It’s that you need the cache to be flushed whenever a new record is inserted.
    Can this be done?
    Sure, otherwise I wouldn’t be talking about it.
    How?
    You just need something to refresh the cache. Something to force reading the database and re-caching the results again.

    ex:

    #model:
    db.define_table('product', Field('x'))
    
    #controller
    def index():
        rows = db().select(db.product.ALL, cache=(cache.ram, 600000))
        if db._lastsql:
            print 'First time or not cached'
        else:
            print 'cached'
        
        return dict(rows=rows)
    
    def insert():
        db.product.insert(x='3')
        rows = db().select(db.product.ALL, cache=(cache.ram, -1))
        return dict(rows=rows)
    
    

    Now, before doing anything, just add some records to the database using the appadmin interface.

    product.id    product.x
    1                  1
    2                  1
    3                  3
    

    Now go to the application’s index page.
    You’ll have the statement “First time or not cached” printed, indicating the first time the database is queried.
    And you’ll get the table:

    product.id    product.x
    1                  1
    2                  1
    3                  3
    

    By repeatedly refreshing the page you only get the same results, and the statement “cached” is printed every time.
    So records are cached for a long period of time.
    Now go to the insert page, which inserts another record;
    the result will be:

    product.id    product.x
    1                  1
    2                  1
    3                  3
    4                  3  # newly inserted
    

    By going to index one last time you’ll see the updated table, and you’ll have the statement “cached” printed even when refreshing the page.

    Explanation

    rows = db().select(db.product.ALL, cache=(cache.ram, -1))
    

    checks whether the same select() statement was issued before and has some cached records; if so, it just refreshes the cache [re-reads the database], and the records then continue to be cached for a time equal to the original caching time (600000 in our case).
    If not, they are simply read from the database.

    Now close web2py and re-run it [to flush all caches and start over], then
    try this code:

    def insert():
        db.product.insert(x='3')
        rows = db().select(db.product.ALL, cache=(cache.ram, -1))
        if db._lastsql:
            print 'First time or not cached'
        else:
            print 'cached'
            
        return dict(rows=rows)
    

    Now only go to the insert page and keep refreshing the page.
    What do you see?
    Correct, the statement “First time or not cached” keeps being printed,
    so there’s no real caching: the database is re-read and, if records were previously cached, the cache is renewed.

    Both (cache.ram, 0) and (cache.ram, -1) are correct, but -1 is always the safe choice.


    Another thing that is trivial to mention, but it’s here anyway for any newbie who may not know it:
    any insert() statement like this one:

     db.table.insert(field_name=value)
    

    returns the id of the newly inserted record so that you may use it
    again.
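    A small sketch of why that is handy (the orders/order_lines tables are made up for the example): the returned id lets you link related records immediately.

     # create an order, then attach a line to it using the id we just got back
     order_id = db.orders.insert(customer='Alice')
     db.order_lines.insert(order_id=order_id, product='flour', quantity=2)
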

    Also any update like :

    db(...).update(field_name=value, field2_name=value2, ...)
    

    This statement returns 0 if no records were updated [the condition is not True for any record in the table]; if the condition is True for some records, the number returned is the number of records that were updated.
    Why would this be helpful?
    Because sometimes you get situations in which, for example, you want to update some user records and then, if the update is successful, you have to send notification mails to those users.

    delete() has the same behavior as update() (see the sketch after the next snippet).

    In both update() & delete(), this is implemented by the following lines of code in gluon/sql.py:

    try:
        counter = self._db._cursor.rowcount
    except:
        counter =  None
    
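    So in your own code the pattern looks something like this (the subscriber table and the notify_subscribers helper are hypothetical):

     n = db(db.subscriber.expired==True).update(active=False)
     if n:
         # n records were updated; e.g. mail the affected users
         notify_subscribers(n)

     m = db(db.subscriber.active==False).delete()
     print '%s records deleted' % m
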

    truncate

    Imagine the following situation: you’ve deleted a record in some table whose id is referred to by another table. Say the referencing field is not a foreign key and no ‘ondelete=CASCADE’ was specified when defining the table; then, when the record is deleted, the records in other tables referring to it are not deleted automatically.

    Let’s imagine that the id of the deleted record was 1000, and then imagine that you enter a new record in that table. What id should it take?
    If it took 1000 you’d be in a mess, right? Many records that have no relation to it would just refer to it, right?
    To prevent this, any new record inserted into the table takes an id that was never used before, so luckily the new record will take the id 1001, even if you’ve deleted all the table’s records and it contains nothing.
    If you want to reset a table to its original state, delete all records and reset the id counter, use:

     db.table_name.truncate()
    

    and congratulations!!! you’ve got a brand new table.


    Can I make a dummy database just for testing the validation of a form and the generation of database queries?

    Yes sure !!!

    # Model
    db = DAL(None)
    db.define_table('members', Field('name', requires=IS_NOT_EMPTY()))
    
    # Controller
    def index():
        form = SQLFORM(db.members)
        if form.accepts(request.vars, session):
            response.flash = T('hey it worked')
        rows = db().select(db.members.ALL)
        print db._lastsql
        return dict(form=form)
    

    Congratulations!!! No need to introduce extra/new syntax.
    Internally DAL(None) behaves like a sqlite db, and you can use it to test query generation as well, but there is no sqlite file, so nothing is stored and there is no overhead.


    Self referencing tables …!!
    The DAL should allow self referencing tables. This is not a DAL issue, it is an SQL issue: there is in fact a logical problem in inserting the first record.
    Actually, web2py does allow you to create self referencing tables.
    Don’t make an automatic reference; instead of a field of type reference, use one of type ‘integer’, and then you can, optionally, use the IS_IN_DB validator.
    For a record with no parent you may use 0 or -1 [-1 is always the safer choice].
    ex:

    db.define_table('class', Field('name'), Field('parent_class', 'integer'))
    

    then insert records setting parent_class manually, and insert -1 if the parent class is ‘Object’ 😀
    A better trick to do this would be:

    db.define_table('class', Field('name'), Field('parent_class', 'integer'))
    # 'class' is a Python keyword, so access the table as db['class']
    db['class'].parent_class.type = 'reference class'
    db['class'].parent_class.requires = IS_NULL_OR(
        IS_IN_DB(db, 'class.id', '%(name)s'))
    

    Now see the IS_NULL_OR(IS_IN_DB()) …. interesting, right?!!!!!! 😀
    Now you can insert an object and just not give it a parent.
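    A brief sketch of using it (continuing the same example; since ‘class’ is a Python keyword, the table is accessed as db['class']):

     # a root class with no parent, then a child pointing at it
     root_id = db['class'].insert(name='Object', parent_class=None)
     db['class'].insert(name='Animal', parent_class=root_id)

     # children of the root
     children = db(db['class'].parent_class==root_id).select()
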


    Q:

    Is there any other straight and plain way to get the result? Actually, I have a table with many columns, and what I want is to select the row which has the max id. As I understood it, db.table.id.max() should return
    the max value in column ‘id’ directly, and then I can use db(db.table.id==maxid).select() to select the row I want.

    A:

    db().select(db.mytable.ALL,orderby=~db.mytable.id,limitby=(0,1)) 
    

    A hint:
    Please take care not to use reserved SQL keywords as table names or field names. RDBMSes vary and there are lots of reserved keywords, some of them specific to particular RDBMSes,
    so take care.


    Abstract tables

    db=DAL() 
    db.define_table('a',Field('name')) 
    db.define_table('b',Field('other')) 
    db.define_table('c',Field('test'),db.a,db.b) 
    print db.c.fields 
    
    ['id','test','name','other'] 
    
    db=DAL() 
    from gluon.sql import SQLTable 
    abstract=SQLTable(None,Field('name')) 
    db.define_table('mytable',abstract,Field('other')) 
    print db.mytable.fields 
    ['id','name','other']
    

    An abstract table is a table that is defined but does not exist in the DB. It can be used to define derived classes that include the same fields. For example:

    db=DAL() 
    from gluon.sql import SQLTable 
    person=SQLTable(None,Field('name'),Field('address')) 
    db.define_table('student',person,Field('degree')) 
    db.define_table('teacher',person,Field('subject')) 
    

    Here the table person is not in the DB, but student and teacher are. The
    latter have the fields defined in person plus their own.
    Do not abuse this. Often it is better to do:

    db=DAL() 
    db.define_table('person',Field('name'),Field('address')) 
    db.define_table('student',Field('person',db.person),Field('degree')) 
    db.define_table('teacher',Field('person',db.person),Field('subject'))
    
    # Field('person', db.person)
    #    is the same as
    # Field('person', 'reference person')
    
    

    In this case a student references a person, but it does not itself have the fields of a
    person.
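    So to get at the person’s fields you follow the reference yourself, e.g. (a sketch; the ‘PhD’ value is made up):

     student = db(db.student.degree=='PhD').select().first()
     person = db(db.person.id==student.person).select().first()
     print person.name, person.address, student.degree
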


    Some sweet and light tricks :

    To check the SQL syntax of the different DBMSes:

    python web2py.py -S welcome 
    db=DAL(None) 
    db.define_table('person',Field('name')) 
    for n in ['sqlite','postgresql','mysql','oracle','mssql','firebird']: 
        db['_dbname']=n 
        print db()._select(db.person.name,limitby=(0,10)) 
    

    Result:

    SELECT person.name FROM person LIMIT 10 OFFSET 0; 
    SELECT person.name FROM person LIMIT 10 OFFSET 0; 
    SELECT person.name FROM person LIMIT 10 OFFSET 0; 
    
    SELECT person.name FROM (SELECT _tmp.*, ROWNUM _row FROM (SELECT person.name FROM person ORDER BY person.id) _tmp WHERE ROWNUM<10 )  WHERE _row>=0; 
    
    SELECT TOP 10 person.name FROM person ORDER BY person.id; 
    SELECT person.name FROM person LIMIT 10 OFFSET 0; 
    

    Another trick: _referenced_by tells you which tables reference a given table, and it is updated when a referencing table is dropped:

    d2=SQLDB('sqlite://d2.db') 
    d2.define_table('person',SQLField('name')) 
    d2.person._referenced_by 
    [] 
    
    d2.define_table('dog',SQLField('name'),SQLField('owner',d2.person)) 
    d2.person._referenced_by 
    [('dog', 'owner')] 
    
    d2.dog.drop() 
    d2.person._referenced_by 
    []
    

    Q:

    How Do I use IIS with web2py?

    A:

    • Run IIS as a proxy and redirect to the web2py web server. This
      solution should work out of the box
    • Use this or that
    • Use fast cgi
