Some tricks to use when dealing with databases

May 23, 2010


  • web2py’s only knowledge about table structure comes from web2py itself:
    to be more specific, from the files created in the databases folder inside web2py’s main directory.
    If you change the tables outside web2py, web2py gets confused, and you have to set migrate=False.
  • If your database connection is not established even though you’ve specified the right connection string, make sure you’ve specified the right port: your database may be running on a port other than the default one.
  • To use a field of type reference, you have to reference a field in a table that exists in the same database; you can’t reference a field in a table in another database. Use a field of type ‘integer’ instead, and be careful with your queries.
  • It is actually easy to add support for a new database. Everything is in one file, gluon/
    • find a Python module that interfaces to MaxDB
    • identify the connection strings
    • add the translation strings at the top of the gluon/ file
    • run the tests (python gluon/
  • Database select syntax: select() takes the fields to return as positional arguments, so you can build the field list dynamically and make something like:

     sel = [db.address[field] for field in fields]
     rows = db(qry).select(*sel)  # note the *
  • Note that: db((q1) & (q2)).select() is equivalent to db(q1)(q2).select()


    Notice that you can also define:

         people_and_their_companies = db( ==
         row = people_and_their_companies.select().first()

    and wow !!! congratulations, you’ve just made an automatic inner join;
    then you can do things like and
  • How to define a table with a large number of fields easily?

    Do something like this:

    fields=[Field('addr%02i' %i,'string') for i in range(21)]
  • If multiple applications access the same database, then for every table
    only one app should have migrate=True, so you know who is altering tables. It is not necessary that all apps have the same define_table, as long as each define_table defines a subset of the existing fields.
    This is not a problem as long as migrate=False.
    If you want one app to change a table definition and all apps to
    reflect that change, yes, you have to change them all.
    This is because changing the table definition may break an app.

  • Difference between requires, required, and notnull

  • requires

    enforces validation at the form level.
    Most of us are well experienced with it, so there’s no need to say more about it.

  • notnull=True

    is enforced at the database level: the database itself refuses to store NULL in the field.

    # model
    db.define_table('my_table', Field('soso', 'string', notnull=True))
    # controller
    def index():
        form = SQLFORM(db.my_table)
        if form.accepts(request.vars, session, dbio=False):
            response.flash = T('Accepted')
        elif form.errors:
            response.flash = T('Not accepted')
        rows = db( > 0).select()
        return dict(form=form, rows=rows)

    In this example, I used dbio=False with the accepts() function to stop the automatic database I/O.
    Now I have full control to do what I want to do.
    Trying to do:

    db.my_table.insert(soso=None)

    while using notnull=True in the table definition in my model will cause the database to refuse the insertion, and you’ll get the error flash message
    ‘Not accepted’ as well as an error message ‘enter a value’.
    This shows you that this validation is enforced at the [database level].

    In fact if you changed the code to :

    if form.accepts(request.vars, session):
         response.flash = ...
    elif form.errors:
         response.flash = ...

    You’ll get the same error when trying to submit the form without any value in it.
    In the former example, using dbio=False, it fails no matter what value you submit in the form, since I’m always trying to insert NULL into the database field using insert() manually.
    I just wanted to show you that the validation is enforced on insert() and at the database, not on the form.
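What notnull=True sets up is an ordinary NOT NULL constraint in the schema. A sketch of the same refusal using plain sqlite3 (table and field names mirror the example above):

```python
# NOT NULL enforcement at the database level, demonstrated with sqlite3
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE my_table (id INTEGER PRIMARY KEY, soso TEXT NOT NULL)')
try:
    conn.execute('INSERT INTO my_table (soso) VALUES (NULL)')
    inserted = True
except sqlite3.IntegrityError:
    inserted = False   # the database itself refuses the NULL
print(inserted)  # False
```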

  • required=True

    is enforced at the level of db.table.insert: it tells web2py that some value (even “” or None) has to be explicitly specified.
    The database itself will still accept NULL values and empty strings.

    db.define_table('my_', Field('soso', required=True), Field('toto'), Field('lolo'))
    def index():
        form = SQLFORM(db.my_)
        if form.accepts(request.vars, session, dbio=False):
            db.my_.insert(soso='')  # OK
            db.my_.insert(soso=None, lolo='ss', toto='')  # OK
            db.my_.insert(lolo='ss', toto='')  # not OK: 'required field not specified'
            response.flash = T('Accepted')
        elif form.errors:
            response.flash = T('Not accepted')
        rows = db( > 0).select()
        return dict(form=form, rows=rows)

    In the previous example, you see that a required field has to be specified whenever you use an insert statement.

  • Using db.executesql()

    Sometimes you just need to execute SQL yourself, and db.executesql happens to be handy in those situations.

    rawrows = db.executesql(yoursql)  # returns a list of tuples, one tuple of column values per row

    Notice that the rows are not parsed, therefore you cannot say
    rawrows[i].colname; you have to say
    rawrows[i][j], where j is the column number as returned by the query.
    This may seem confusing in the beginning, but in fact if you do something like:

    r = db.executesql('select cola,colb from mytable;')

    r[0][0] is cola and r[0][1] is colb. It does not matter in which
    order they were defined in the table; all that matters is the order in which you asked for them in the query.


    db.define_table('my_table', Field('soso'), Field('toto'), Field('lolo'))
    # suppose you've entered a row (soso='a', toto='b', lolo='c') using the appadmin interface
    def index():    
        r = db.executesql('select soso, lolo from my_table')
        print r  # [(u'a', u'c')]
        print r[0][0], r[0][1] #a c
        return dict()
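The same positional-tuple behavior can be reproduced with plain sqlite3, which is roughly what executesql does under the hood:

```python
# Raw SQL results come back as positional tuples, not attribute-addressable rows
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE my_table (soso TEXT, toto TEXT, lolo TEXT)')
conn.execute("INSERT INTO my_table VALUES ('a', 'b', 'c')")
r = conn.execute('SELECT soso, lolo FROM my_table').fetchall()
print(r)                 # [('a', 'c')] -- order follows the query, not the table
print(r[0][0], r[0][1])  # a c
```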

    One thing to note when using executesql is that you need to escape strings yourself so that you’re not vulnerable to SQL injection attacks,
    though when you’re using DAL functions you don’t need to do this, since it’s automatic.

    Can I have a random orderby directly using the web2py DAL?
    Yes, use:

    rows = db().select(db.mytable.ALL, orderby='<random>')

    Interesting, right? 😀

    An example showing how can indexes speed things up

    I defined a model with two tables, let web2py create them (tried this
    with both sqlite and mysql), and then imported data (about 1,400 rows
    and 140,000 rows, respectively) into the tables using external
    scripts. After doing so I found my app became very slow – it took
    10-15 seconds to respond. Then I tried the csv import function in
    web2py, which worked, and there is no slowness at all. Why does this happen?

    The one thing that can make it slow is the absence of indices. You
    need to create indices on fields you use for searching. If you use
    sqlite you can do, after define_table:

    db.executesql('CREATE INDEX IF NOT EXISTS myindex ON mytable (colname);')

    For postgres, mysql, and oracle you are better off creating the index
    outside web2py.
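For reference, here is the index statement run directly against sqlite3 (index and table names are illustrative):

```python
# Creating an index on a search field, directly with sqlite3
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)')
conn.execute('CREATE INDEX IF NOT EXISTS person_name_idx ON person (name)')
# the index now shows up in sqlite's catalog
idx = conn.execute("SELECT name FROM sqlite_master WHERE type='index'").fetchall()
print(idx)  # [('person_name_idx',)]
```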

    What else, other than missing indexes on a big number of records, could affect the performance of my database?

    If you’re making joins using native SQL syntax, maybe via executesql(), then the db may be performing your joins
    in the order you’ve specified. This is an SQL standard, and in those cases there will not be any optimization. Therefore, whenever you make joins in traditional SQL, make sure you do the most discriminating joins first (the ones that produce the smallest result sets).

    Using the DAL, the query planner in the db may optimize the query for you, so the order of joins isn’t so important.

    What would be the simplest way to create an entry form for a new record, pre-filled with the fields from another record in the same
    table (knowing that record’s id) ?


    Assuming the table is db.mytable and the id of the source record is source_id, you can do:

    # get the record
    row = db.mytable[source_id]
    # set the default for all fields (except id)
    for fieldname in db.mytable.fields:
        if fieldname != 'id':
            db.mytable[fieldname].default = row[fieldname]

    Can you give me some examples of using joins in the DAL?

  • Ex:1
    Given :

    db.define_table('children',   Field('user_name' )) 
    db.define_table('child_profiles', Field('child_id',  db.children), Field('first_name' ), ...) 
    db.define_table('buddies', Field('buddy_id' , db.children),Field('child_id' , db.children), ..) 

    You can do :

    child_id = ...  # maybe request.args(0), or anything else according to the logic of your application
    # db(cond1)(cond2) is the same as db((cond1)&(cond2))
    rows = db(db.buddies.child_id == child_id)(db.buddies.buddy_id ==
    for row in rows:
        print row.children.user_name
  • Ex:2

    I have 2 tables, “owner” and “dog”. The “dog” table has a many-to-one relation to owners. If I have an owner id, how can I make a join based on it?
    I did this:

    present_owner = db(( == owner_id) & ( ==

    But when I wanted to retrieve data from it, it raised a traceback.
    What is the proper way to retrieve the data from the join?


    * For your query to look more like a join, you may rewrite it as:

     rows = db(( == owner_id) & ( ==

    rows now contains joined records:

    owner = rows[0].owner  # or rows.first().owner: an owner
    dog = rows[0].dog      # or rows.first().dog: a dog that belongs to that owner
    # then you can get more info using,, .......
  • Ex:3
    Q: How to make two inner joins using the following tables?

    db.define_table('table1', Field('name'))
    db.define_table('table3', Field('name'))
    db.define_table('table2', Field('table1_id', db.table1), Field('table3_id', db.table3))

    Ans:

    rows = db(( == db.table2.table1_id) &
              ( == db.table2.table3_id)).select()
    for row in rows:
        print,
  • Q: Does DAL support outer joins?

    Yes, web2py does left outer joins, BUT they require an explicit “left” keyword, otherwise you’re making an inner join.

    There is no need for right joins, since you get the same functionality from a left join with the tables swapped.
    You can check the manual for discussions and examples about that .

    In fact, you could do something like:

    db.define_table('action_queue', Field('user_id','integer')) 
    db.define_table('unprocessed', Field('action_queue_id','integer')) 
    db(db.unprocessed.action_queue_id ==, left=db.unprocessed)

    BUT: this is an old notation which is supported only for backward compatibility and works on sqlite. It should be:

    db().select(db.action_queue.ALL, db.unprocessed.ALL,
                left=db.unprocessed.on(db.unprocessed.action_queue_id ==

  • Q: How to use sub-queries?

    By using db()._select() instead of db().select():

    q = db()._select(db.mytable.myfield)
    print q

    you can get the real SQL query the DAL would use to perform the selection.
    Using the same concept, suppose you have:

    db.define_table('a', Field('f1'))
    db.define_table('b', Field('f1'), Field('f2'))
    # a contains the following records:
    #       a.f1
    #   3         1
    #   4         2
    #   5         3
    #   6         4
    # b contains the following records:
    #       b.f1    b.f2
    #   3         1       y
    #   4         4       x

    Now suppose you want to perform the following query :

    SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a);

    So what can you do?

    You have 2 options:

    • using :
      db.executesql('SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a);')
    • Generate the sub-query using db()._select(), then use it in another query using DAL syntax :
      def index():   
         q = db()._select(db.a.f1)
         rows = db(db.b.f1.belongs(q)).select(db.b.f2)
         print db._lastsql
         return dict(rows=rows)

    Great!!! And db._lastsql will print exactly the same query you wanted:

    SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a);

  • belongs can accept a set or a “select…” string, and _select(), as you may have noticed, creates a query string.
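The SQL that this produces can be checked against plain sqlite3 using the sample data from the tables above:

```python
# The generated subquery, run directly: SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a)
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY, f1 TEXT);
CREATE TABLE b (id INTEGER PRIMARY KEY, f1 TEXT, f2 TEXT);
INSERT INTO a (id, f1) VALUES (3,'1'),(4,'2'),(5,'3'),(6,'4');
INSERT INTO b (id, f1, f2) VALUES (3,'1','y'),(4,'4','x');
""")
rows = conn.execute('SELECT b.f2 FROM b WHERE b.f1 IN (SELECT a.f1 FROM a)').fetchall()
print(rows)  # both b rows match, since '1' and '4' appear in a.f1
```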


    I’ve got three tables: “companies”, “region”, and “items”.
    In “companies” there is a “region_id” column. In “items” there are “companies_id” and “price” columns. Is there a simple way to select the top 10 companies which belong to a particular region and whose item prices sum to the highest amount?


    First, check the “Grouping and counting” section in the DAL chapter of the manual.

    Try this:

    rows = db(db.companies.region_id == region_id)( == db.items.company_id).select(, 'sum(items.price)',
        orderby='sum(items.price) desc',
        limitby=(0, 10))
    for row in rows:
        print, row._extra['sum(items.price)']

    A better, more recent way to do it (the previous one will still work, for backward compatibility):

    summation = db.items.price.sum()
    rows = ... .select(db.companies.ALL, summation, groupby=db.items.company_id, ...)
    for row in rows:
        print, row._extra[summation]

    The first method is always there to help if the function you’re trying to use is not (yet) implemented in web2py.

    In fact, doing something like:

    str(db.table.field.max())  # assuming field is numeric

    will print:

    MAX(table.field)

    Wooooooooooooooow, so the DAL expression and the raw SQL string are the same thing.

    So, to understand this, I’ll summarize:

    Some backend SQL DBMSes support some functions and don’t support others.
    If a function is supported by every backend DBMS that web2py supports, it is implemented in the DAL (like sum() and max() above).
    If not, you can still use it [assuming you know what you’re doing and the DBMS you’ll always use supports it]
    by writing the query string directly.

    So in the following example, you’ll get things better :

    I’m using web2py with PostgreSQL. I can use ‘like’ and ‘belongs’ to construct a simple SQL query, but I didn’t find anything equivalent to the regular-expression matching operators ~ or ~*. Did I miss something here? Or do I have to use Python to do the regex matching on results returned from SQL?


    The fact is that postgresql supports it but many backends do not, therefore there is no API for it. You have two choices:

    • write the query in SQL:

      query = "table.field SIMILAR TO '(a|b)'" 
      rows = db(query).select()

    • or do the pattern matching in Python on the returned values:

      import re
      rows.response = [x for x in rows.response if re.match(pattern, x[1])]
  • Let’s take another example to make sure you’ve got all this stuff. OK? And you’re welcome 😀

    Assume I have:

    # Model:-
    db.define_table('members', Field('name'))
    db.define_table('resources', Field('resource_tier', 'integer' ),
                                  Field('resource_amount', 'integer'))
    db.define_table('deposits', Field('depositer_id', 'reference members',
                                      requires=IS_IN_DB(db,, '%(name)s')),
                                Field('resource_id', 'reference resources',
                                      requires=IS_IN_DB(db,

    How can I translate this kind of SQL statement?

    SELECT, sum(resources.resource_amount) FROM resources, deposits, members WHERE ( AND AND resources.resource_tier=0) GROUP BY deposits.depositer_id;


    summation = db.resources.resource_amount.sum()
    rows = db(( == db.deposits.resource_id) &
              ( == db.deposits.depositer_id) &
              (db.resources.resource_tier == 0)).select(, summation, groupby=db.deposits.depositer_id)
    for row in rows:
        print, row._extra[summation]

    Now it’s time for another question on the same example:

    How could I translate this to web2py syntax?

    SELECT, SUM(resource_amount*(1-ABS(resource_tier))), SUM(resource_amount*(1-ABS(resource_tier-1))), SUM(resource_amount*(1-ABS(resource_tier-2))) FROM members LEFT JOIN deposits,resources WHERE ( AND GROUP BY;

    sqlrows = db( == db.deposits.resource_id).select(,
        'SUM(resources.resource_amount*(1-ABS(resources.resource_tier)))',
        left=db.deposits.on(db.deposits.depositer_id ==,
    print sqlrows

    Having two fields of type datetime, how can I order records according to the difference between them?

    rows = db(( == &
              ( > d1) &
              ( < d2)).select(, db.c.ALL,
        orderby='DATEDIFF(c.c_time, ck.c_time)')
    for row in rows:
        print

    DATEDIFF() has to be supported by your DBMS.


    db.define_table('orders',   # table name is illustrative; the head of this definition was lost
       Field('price', 'float'), 
       Field('quantity', 'integer')) 

    I want to create a column called TotalPrice that multiplies price and quantity.

    TotalPrice = 'orders.price*orders.quantity'
    rows = db().select(db.orders.ALL, TotalPrice)
    for row in rows: print, row._extra[TotalPrice]

    What do I do to implement my own function?
    Ah... Looking at, I found that very easy... that’s why I like web2py 😀

    This example should make things clear:

    db.define_table('koko', Field('toto', 'integer'))
    from gluon.sql import Expression
    def mulby5():
        return Expression('koko.toto*5', type='integer')
    db.koko.toto.mulby5 = mulby5
    rows = db( > 0).select(db.koko.ALL, db.koko.toto.mulby5())
    for row in rows: print, row.koko.toto, row._extra[db.koko.toto.mulby5()]
    1 5 25
    2 10 50

    You can get things done without using ‘Expression’; just make the function return the string directly.

    One thing to note, though: because you’ve added an extra payload to the Rows object resulting from the query, when you want to select a field from it you should specify the table name explicitly.

    Update record/records
    You can use:

    # db(...) is a Set object, and this updates the field 'my_field'
    # in all records in the set with the specified value
    db( > 0).update(my_field=value)

    You can update only one row [one selected row] using the update_record function:

    rows = db(...).select()
    for row in rows:
        row.update_record(my_field=value)
    # or
    rows.first().update_record(my_field=value)

    So remember: update() acts on a whole Set of records, while update_record() acts on a single Row.

    # model
    db.define_table('node', Field('article_id'), Field('rgt', 'integer'))
    db.define_table('comment', Field('article_id', requires=IS_IN_DB(db, db.node.article_id)), Field('lft', 'integer'))
    # data inserted in the db tables:
    #    node.article_id   node.rgt
    #   10          1                 1
    #   11          2                 2
    #   12          3                 3
    #    comment.article_id   comment.lft
    #   8             1                    1
    #   9             2                    2
    #   10            3                    4
    # code in controller:
    for row in db((db.comment.article_id == db.node.article_id) &
                  (db.comment.lft > db.node.rgt)).select():
        row.comment.update_record(lft=row.comment.lft + 2)
    # result:
    #    comment.article_id   comment.lft
    #   8             1                    1
    #   9             2                    2
    #   10            3                    6

    In fact, as some of you may not have noticed, db(db.tableA.field == db.tableB.field) makes an automatic join, so if you do something like:

    rows = db((db.comment.article_id == db.node.article_id) &
              (db.comment.lft > db.node.rgt)).select()

    you’ll never be able to make something like:

    for row in rows:
        row.update_record(lft=row.lft + 2)

    Why? Because in this case you have an automatic join, and each row ends up looking like:

    <Row {'comment': <Row {'update_record': <function <lambda> at 0x927fc6c>, 'lft': 1, 'article_id': '1', 'id': 8, 'delete_record': <function <lambda> at 0x927f764>}>, 'node': <Row {'update_record': <function <lambda> at 0x927fc34>, 'rgt': 1, 'article_id': '1', 'id': 10, 'delete_record': <function <lambda> at 0x927ff7c>}>}>

    So in order to use update_record you need to specify explicitly which sub-row to call it on, and you end up with something like the following code to make things work:

    for row in db((db.comment.article_id == db.node.article_id) &
                  (db.comment.lft > db.node.rgt)).select():
        row.comment.update_record(lft=row.comment.lft + 2)

    Of course you don’t need this if you just select records out of one table,
    so you can safely do the following:

     row = db( > 2).select().first()
     row.update_record(lft=row.lft + 2)

    In fact you can use another trick in order to update the row which is :-

    db.executesql('update comment, node set lft=lft+2 where %s;' %str(db.comment.article_id==db.node.article_id))

    which mixes SQL with web2py-SQL but is safe from SQL injections.
    Warning :
    Using the previous query with SQLITE, I got that error :

    OperationalError: near ",": syntax error

    If somebody knows what’s wrong and how it can be fixed, please let me know.
    My guess is that it can be done another way, but I’m not an SQL guru.

    To summarize:
    update_record is very useful in situations where you want to use the old value of a field in a row to make some calculation and then update it:

    rows.first().update_record(counter = rows.first().counter + 1) 

    or you can make it like:

    query=db.table.field == 'whatever' 
    db.executesql('UPDATE ..... WHERE %s' % str(query)) 

    Another cool thing that I want to talk about is the flexibility of the DAL in dealing with queries.

    In fact:

    db.table.field == value

    produces a Query object... interesting, right?!!! 😀
    Yes, a Query object that can be ANDed or ORed,
    so you can easily mix many queries together:

      query = db.table.company_id > 5
      query2 = auth.accessible_query('update', db.table,
      query3 = auth.accessible_query('owner', db.table,
      query4 = (query2 | query3)
      result = db(query & query4).select()

    auth.accessible_query(...) returns a query matching the records on which the given user has the given permission.
    As you can see, I get the records in on which the currently logged-in user has either the ‘update’ or the ‘owner’ permission, by using two queries that are ORed together into another query, which in its turn is ANDed with yet another query.
    This is beautiful... right?

    You can even do something like:

    # model
    db.define_table('node', Field('article_id'), Field('rgt', 'integer'))
    db.define_table('comment', Field('article_id', requires=IS_IN_DB(db, db.node.article_id)), Field('lft', 'integer'), Field('order', 'string'))
    # data:    comment.article_id   comment.lft   comment.order
    #   1              1                   1             None
    #   2              2                   2             None
    #   3              3                   4             None
    #   4              1                   1             a
    #   5              2                   2             d
    #   6              3                   2             b
    #   7              2                   1             Z
    #   8              3                   1             -
    #   9              2                   2             -
    # controller
    def index():
        myorder = 'comment.order COLLATE NOCASE'  # a raw SQL string works as orderby (sqlite syntax here)
        print db().select(db.comment.ALL, orderby=myorder)
    # result,comment.article_id,comment.lft,comment.order
    1,           1,                  1,           <NULL>
    2,           2,                  2,           <NULL>
    3,           3,                  4,           <NULL>
    8,           3,                  1,
    9,           2,                  2,
    4,           1,                  1,             a
    6,           3,                  2,             b
    5,           2,                  2,             d
    7,           2,                  1,             Z

    BTW : groupby can be dealt with in the same manner

    Another neat example on using sub-queries and query sets:-

    Suppose you have a list of strings representing search terms, and you want to search the database for strings containing them (‘LIKE’ them).
    The list is variable, though, meaning that it may hold other search terms in the future. What can I do then?

    search_terms = ['..', '..', '..']
    query = None
    for i, st in enumerate(search_terms):
        subquery ='%' + st + '%')
        query = query | subquery if i else subquery
    companies = db(query).select()

    See the query = query | subquery if i else subquery? 😀 Interesting, right?!!
    For Python newbies: this has the effect of choosing
    query = subquery in the first iteration (since i == 0);
    after that, query and subquery are ORed each time, resulting in a query set that will be ORed with the next subquery. Coooool, right?!!
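The same accumulate-with-OR pattern can be sketched with plain sqlite3 and LIKE, so it runs outside web2py (table contents and names are illustrative):

```python
# Build one big OR condition from a variable-length list of search terms
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE companies (name TEXT);
INSERT INTO companies VALUES ('Acme Corp'), ('Bolt Ltd'), ('Acme Labs');
""")
search_terms = ['Acme', 'Bolt']
query, params = None, []
for i, st in enumerate(search_terms):
    sub = 'name LIKE ?'
    query = '(%s OR %s)' % (query, sub) if i else sub   # first pass: query = sub
    params.append('%' + st + '%')
rows = conn.execute('SELECT name FROM companies WHERE %s' % query, params).fetchall()
print(query)  # (name LIKE ? OR name LIKE ?)
print(rows)   # all three rows match one of the terms
```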

    Some other interesting stuff
    The following code is interesting, but I don’t know how to make use of it. If somebody has an idea, please let me know.

    str(( > 1) & ( <= 4))
    # '(>1 AND<=4)'


    In fact, both of the following queries are equivalent:

    db = DAL('sqlite:memory:')  # interesting right ?!!!
    db.define_table('x', Field('y'))
    db((db.x.y > 1) & (db.x.y < 4)).select()
    db(db.x.y > 1)(db.x.y < 4).select()

    Oh, by the way, db((..) & (..)) is equivalent to db(...)(...), which is what is happening in the code above.

    str(db().select()) gives you the result as comma-separated values. Interesting, right?!!
    It’s useful, by the way, in the shell when you’re testing, where there are no views, just objects 😀

    db(db.table.field ... value) is equivalent to db('table.field ... value'). More interesting, right?

    This is very interesting indeed, in the sense that it enables you to make complicated queries.
    Instead of making something like:

    sql_string = 'SELECT * from message where 
    distance_sphere(GeometryFromText(\'POINT(\' || longitude || \' \' || 
    latitude || \')\', 2), GeometryFromText(\'POINT(-122.415686 
    37.799724)\', 2)) > 105;' 
    records = db.executesql(sql_string) 
    return dict(records=records) 

    you can do :

    cond="distance_sphere(GeometryFromText(\'POINT(\' || longitude || \'   
    \' ||latitude || \')\', 2), GeometryFromText(\'POINT 
    (-122.41568637.799724)\', 2)) > 105" 
    records = db(cond).select(db.message.ALL)

    Oh, by the way, the strings have their quotes escaped to prevent SQL injection attacks; this is required if you’re going to use executesql() directly.
    I found this very cool what about you ?!!! 😀

    Now I think it’s time to generalize this for a deeper understanding.
    In the web2py folder you can get an application shell using:

    python -S yourapp -M

    Now you can do:

    db.define_table('test', Field('x'))
    db.test.insert(x='hey "hamdy", how are ya')
    # in the shell, don't forget to db.commit() so that the effects are reflected;
    # commits are automatic from within web2py, but from within the shell they have to be explicit
    query = db.test.x == 'hey "hamdy", how are ya'
    print query
    # the result:
    'test.x=\'hey "hamdy", how are ya\''
    # quotes are escaped, right?

    Now, what if the query string is complicated enough and it’s a bare SQL query? How can I use it safely, without using executesql(), which is not safe unless you take care to escape your strings properly?

    In my case, I want to select field x’s value from table test, so I can make my query string like:

    cond = """x = 'hey "hamdy", how are ya'"""
    cond = """test.x = 'hey "hamdy", how are ya'"""

    Now, when I want to select from the database, I can play it easily like:

    print db(cond).select(db.test.ALL)
    # result
    ',test.x\r\n1,"hey ""hamdy"", how are ya"\r\n'

    Note that I put the query string inside db(), then chose the table to select from in the select(),
    and it worked.
    You don’t have to specify the table inside the select() [it’s optional in this case] if you provided it in the string.
    This is not going to work, though:

    cond = """db.test.x = 'hey "hamdy", how are ya'"""

    since, when specifying a query string, you don’t say:

    'db.table.field ....'

    We’re not using pure DAL syntax here.
    Instead, say:

    'table.field ....'
    'field ....'

    You may instead do:

    db.test.x == 'hey "hamdy", how are ya'
    # <gluon.sql.Query object at 0x257bcd0>
    print db(db.test.x == 'hey "hamdy", how are ya').select()
    # ',test.x\r\n1,"hey ""hamdy"", how are ya"\r\n'

    Confusing?!!! 😀
    It's so simple; as a rule of thumb you can either do:
    db.table.field == value
    db.table.field > value
    # or pass a string; a query string can't have == inside it but can contain any other conditional operator
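When you do have to step outside the DAL, the standard injection-safe alternative to escaping strings by hand is a parameterized query, which most DB drivers (including the one executesql sits on) support. A sqlite3 sketch with the same sample value:

```python
# Parameterized queries: the driver handles quoting, no manual escaping needed
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE test (id INTEGER PRIMARY KEY, x TEXT)')
value = 'hey "hamdy", how are ya'
conn.execute('INSERT INTO test (x) VALUES (?)', (value,))
rows = conn.execute('SELECT x FROM test WHERE x = ?', (value,)).fetchall()
print(rows)  # [('hey "hamdy", how are ya',)]
```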

    Following a post which explains how to check whether the cache is functioning or not, I used the same trick to demonstrate another interesting thing that I like very much.
    All of us know that select() statements can be cached using something like:

    db(...).select(cache=(cache.ram, 3600))  # 3600 is the expiration time in seconds

    Now imagine this scenario with me: you have a set of records that are cached in some controller action [say the index of your web application];
    those records represent recently added products, if your web application is an online store.
    Now suppose you have another controller function that adds new products to your online store.
    What’s the problem here?
    It’s that you need the cache to be flushed whenever a new record is inserted.
    Can this be done?
    Sure, otherwise I wouldn’t have been talking about it.
    You just need something to refresh the cache. Something to force reading the database and re-caching the results again.


    db.define_table('product', Field('x'))
    def index():
        rows = db().select(db.product.ALL, cache=(cache.ram, 600000))
        if db._lastsql:
            print 'First time or not cached'
        else:
            print 'cached'
        return dict(rows=rows)
    def insert():
        db.product.insert(x=3)
        rows = db().select(db.product.ALL, cache=(cache.ram, -1))
        return dict(rows=rows)

    Now, before doing anything, just add records to the database using the appadmin interface:
    #    product.x
    #   1          1
    #   2          1
    #   3          3

    Now go to the application’s index page.
    You’ll have the statement “First time or not cached” printed, indicating the first time the database is queried,
    and you’ll get the table:
    #    product.x
    #   1          1
    #   2          1
    #   3          3

    By refreshing the page repeatedly you get the same results, and the statement “cached” is printed every time.
    So the records are cached for a long period of time.
    Now go to the insert page, which inserts another record;
    the result will be:
    #    product.x
    #   1          1
    #   2          1
    #   3          3
    #   4          3  # newly inserted

    By going to index one more time you’ll see the updated table, and you’ll have the statement “cached” printed even when refreshing the page.


    rows = db().select(db.product.ALL, cache=(cache.ram, -1))

    checks whether the same select() statement was issued before and has cached records; if so, it refreshes the cache [re-reads the database], and the records then continue to be cached for a time equal to the original caching time (600000 in our case).
    If not, they are simply read and cached.

    Now close web2py then re-run it [to flush all caches and starting over]
    try this code:

    def insert():
        rows = db().select(db.product.ALL, cache=(cache.ram, -1))
        if db._lastsql:
            print 'First time or not cached'
        else:
            print 'cached'
        return dict(rows=rows)

    Now go only to the insert page and keep refreshing it.
    What do you see?
    Correct: the statement “First time or not cached” keeps being printed,
    so there is no real caching: the database is re-read, and if the records were previously cached, the cache is renewed.

    Both (cache.ram, 0) and (cache.ram, -1) are correct, but -1 is always the safe choice.
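The refresh behavior described above can be modeled in a few lines of plain Python (a toy cache, not web2py's cache.ram; all names are illustrative):

```python
# A toy in-RAM cache: a positive expiration serves cached results,
# a non-positive expiration forces a refresh and re-caches
import time

_cache = {}   # key -> (timestamp, value)

def cache_call(key, func, expire):
    now = time.time()
    hit = _cache.get(key)
    if expire > 0 and hit is not None and now - hit[0] < expire:
        return hit[1], 'cached'
    value = func()                 # re-read the "database"
    _cache[key] = (now, value)     # re-cache for future callers
    return value, 'refreshed'

data = [1, 2, 3]
v1, s1 = cache_call('rows', lambda: list(data), 600000)  # first read
data.append(4)                                           # a new record arrives
v2, s2 = cache_call('rows', lambda: list(data), 600000)  # stale, served from cache
v3, s3 = cache_call('rows', lambda: list(data), -1)      # forces a refresh
print(s1, s2, s3)  # refreshed cached refreshed
```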

    Another thing that is trivial to mention, but it’s here anyway for any newbie who may not know it:
    any insert() statement, like this one:

    id = db.mytable.insert(field_name=value)

    returns the id of the newly inserted record, so that you may use it.

    Also, any update statement like:

    db(...).update(field_name=value, field2_name=value2, ...)

    returns 0 if no records were updated [the condition is not True for any record in the table]; if the condition is True for some records, the number returned is the number of records that were updated.
    Why would this be helpful?
    Because sometimes you get situations in which, for example, you’d like to update some user records and then, if the update is successful, send notification mails to those users.

    delete() has the same behavior as update

    In both update() & delete(), this is done using the following lines of code in gluon/

        try:
            counter = self._db._cursor.rowcount
        except:
            counter = None

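The rowcount idea maps directly onto the DB driver; a sqlite3 sketch:

```python
# update()/delete() returning the number of affected rows comes from cursor.rowcount
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)')
conn.executemany('INSERT INTO users (active) VALUES (?)', [(0,), (0,), (1,)])
cur = conn.execute('UPDATE users SET active = 1 WHERE active = 0')
print(cur.rowcount)   # 2 -- two records matched and were updated
cur = conn.execute('UPDATE users SET active = 1 WHERE active = 99')
print(cur.rowcount)   # 0 -- the condition matched nothing
```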
    Imagine the following situation: you’ve deleted a record in some table whose id is referred to by another table; the referencing field is not a foreign key, and no ‘ondelete=CASCADE’ was specified when defining the table. So when the record is deleted, the records in other tables referring to it will not be deleted automatically.

    Let’s imagine that the id of the deleted record was 1000, and then imagine that you enter a new record into that table. What id should it take?
    If it took 1000, you’d be in a mess, right? Many records that have no relation to it would suddenly refer to it.
    To prevent this, any new record inserted into the database takes an id that was never used before, so the new record will take the id 1001 even if you’ve deleted all the table’s records and it contains nothing.
    If you want to reset a table to its original state, deleting all records and resetting the id counter, use:

    db.mytable.truncate()
    And congratulations!!! You’ve got a brand new table.
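Here is the ids-not-reused behavior and the reset, demonstrated with plain sqlite3 (roughly what a truncate/counter reset does for you; table name is illustrative):

```python
# Deleted ids are not reused; resetting the sequence gives a brand new table
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)')
for _ in range(3):
    conn.execute("INSERT INTO t (x) VALUES ('a')")       # ids 1, 2, 3
conn.execute('DELETE FROM t')                            # table now empty...
conn.execute("INSERT INTO t (x) VALUES ('b')")
next_id = conn.execute('SELECT max(id) FROM t').fetchone()[0]
print(next_id)  # 4 -- old ids are not reused
conn.execute('DELETE FROM t')
conn.execute("DELETE FROM sqlite_sequence WHERE name='t'")  # reset the counter
conn.execute("INSERT INTO t (x) VALUES ('c')")
fresh_id = conn.execute('SELECT max(id) FROM t').fetchone()[0]
print(fresh_id)  # 1 -- a brand new table
```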

    Can I make a dummy database for just testing the validation of a form and the generation of database queries?

    Yes sure !!!

    # Model
    db = DAL(None)
    db.define_table('members', Field('name', requires=IS_NOT_EMPTY()))
    # Controller
    def index():
        form = SQLFORM(db.members)
        if form.accepts(request.vars, session):
            response.flash = T('hey it worked')
        print db._lastsql
        return dict(form=form)

    Congratulations !!! without need to introduce extra/new syntax.
    Internally DAL(None) behaves like a sqlite db and you can use it test query generations as well but there is nosqlite file so nothing is stored and no overhead.

    Self-referencing tables…!!
    Should the DAL allow self-referencing tables? This is not a DAL issue, it is an SQL issue: there is in fact a logical problem in inserting the first record (it has no existing parent to point to).
    Web2py actually does allow you to create self-referencing tables.
    One trick: instead of a field of type reference, use a field of type 'integer'; then you can, optionally, use the IS_IN_DB validator.
    For a parent record you may use 0 or -1, but -1 is always the safer choice:

    db.define_table('class', Field('name'), Field('parent_class', 'integer'))

    then insert records, setting parent_class manually, inserting -1 when the parent class is 'Object' 😀
    A better trick would be:

    db.define_table('class', Field('name'), Field('parent_class', 'integer'))
    db['class'].parent_class.type = 'reference class'
    # also set (the IS_IN_DB arguments here are an assumption):
    db['class'].parent_class.requires = IS_NULL_OR(IS_IN_DB(db, ''))

    (Note that class is a Python keyword, so the table is accessed as db['class'].)

    Now see the IS_NULL_OR(IS_IN_DB()) trick. Interesting, right?!! 😀
    Now you can insert an object and simply not give it a parent.
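    The -1-as-root convention can be sketched in plain SQL (sqlite3 here; the table mirrors the class definition above):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    # parent_class is a plain integer, not a foreign key, so -1 can mark the root
    conn.execute("CREATE TABLE class (id INTEGER PRIMARY KEY, name TEXT, parent_class INTEGER)")

    # the first record has no parent: insert it with the sentinel -1
    conn.execute("INSERT INTO class (name, parent_class) VALUES ('Object', -1)")
    root_id = conn.execute("SELECT id FROM class WHERE parent_class = -1").fetchone()[0]

    # children reference the root by its real id
    conn.execute("INSERT INTO class (name, parent_class) VALUES ('Animal', ?)", (root_id,))
    child_parent = conn.execute(
        "SELECT parent_class FROM class WHERE name = 'Animal'").fetchone()[0]
    ```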


    Is there any other straight and plain way to get the result? Actually, I have a table with many columns, and what I want is to select the row that has the max id. As I understood it, the max value of the 'id' column should come back directly, and then I can use db( == that_value) to select the row I want.
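    A plainer answer is to skip the separate max() lookup entirely: order by id descending and take one row. A minimal sqlite3 sketch (the table name is made up); in web2py the equivalent would be something like db().select(db.thing.ALL, orderby=~, limitby=(0, 1)).first():

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, label TEXT)")
    conn.executemany("INSERT INTO thing (label) VALUES (?)", [("x",), ("y",), ("z",)])

    # one query: the whole row with the maximum id, no separate max() needed
    row = conn.execute("SELECT id, label FROM thing ORDER BY id DESC LIMIT 1").fetchone()
    ```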



    A hint:
    Please take care not to use reserved SQL keywords as table names or field names. RDBMSes vary and there are lots of reserved keywords, some of them specific to a particular RDBMS,
    so take care.
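    A tiny sanity check you can run over your candidate names; the keyword set below is only a small sample, real RDBMS keyword lists are much longer:

    ```python
    # A small sample of words reserved in at least one major RDBMS;
    # consult your database's documentation for the full list.
    RESERVED_SAMPLE = {
        "select", "from", "where", "order", "group", "table", "index",
        "user", "desc", "check", "column", "level", "size",
    }

    def risky_names(names):
        """Return the names that collide with the sampled reserved keywords."""
        return [n for n in names if n.lower() in RESERVED_SAMPLE]

    bad = risky_names(["customer", "Order", "size", "address"])  # ['Order', 'size']
    ```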

    Abstract tables

    An abstract table is a table that is defined but does not exist in the DB. It can be used to define derived tables that include the same fields. For example:

    from gluon.sql import SQLTable

    Here the table person is not in the DB, but student and teacher are; the latter two have the fields defined in person plus their own.
    Do not abuse this. Often it is better to do:

    # Field('person', db.person)
    # is the same as
    # Field('person', 'reference person')

    if a student references a person but does not need to duplicate the fields of a person.

    Some sweet and light tricks:

    To check the SQL syntax generated for the different DBMSes:

    python -S welcome
    for n in ['sqlite', 'postgresql', 'mysql', 'oracle', 'mssql', 'firebird']:
        # re-create the DAL with the dialect n here, then:
        print db()._select(db.person.ALL, limitby=(0, 10))

    which prints something like:

    SELECT FROM person LIMIT 10 OFFSET 0;
    SELECT FROM person LIMIT 10 OFFSET 0;
    SELECT FROM person LIMIT 10 OFFSET 0;
    SELECT TOP 10 FROM person ORDER BY;
    SELECT FROM person LIMIT 10 OFFSET 0;



    How do I use IIS with web2py?


    • Run IIS as a proxy and redirect to the web2py web server. This
      solution should work out of the box.
    • Use this or that.
    • Use FastCGI.


    May 10, 2010

    Q: response.flash or session.flash?

    It's simple: just use response.flash in the normal case, where there is no redirection.

    def index():
        response.flash = T('yes yes')
        return dict()

    If you're going to redirect the user from one page to another, then you have to use session.flash = … because it's the easiest way to keep the message across multiple requests.

    def index():
        session.flash = T('yes yes')
        redirect(URL(request.application, 'default', 'function2'))
        return dict()

    As you see, using response.flash here would not be correct, because the user is going to be redirected, which means another request and another response.

    In fact, when a page is loaded, if there is a session.flash it is copied into response.flash (so that it is displayed) and then reset to None.
    Why reset session.flash to None? So that the message is displayed only once.

    So what if I redirect and then redirect again? How can I keep the flash message so that it is displayed after the last redirection?

    If you redirect and redirect again, the message is lost unless you set session.flash again before the second redirection.
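    The copy-then-reset mechanism can be sketched with plain dicts standing in for session and response (the names are illustrative, not web2py internals):

    ```python
    def start_request(session, response):
        """Sketch of what happens at the start of each request:
        a pending session flash is promoted to the response and consumed."""
        if session.get("flash") is not None:
            response["flash"] = session["flash"]
            session["flash"] = None

    session, response = {"flash": "record saved"}, {}
    start_request(session, response)           # first request after the redirect
    shown = response["flash"]                  # 'record saved' is displayed

    response2 = {}
    start_request(session, response2)          # a second redirect/request
    lost = response2.get("flash")              # None: the message was consumed
    ```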


    How to check that the cache is functioning

    May 4, 2010
  • Make an example like this (db.mytable is a made-up table name):

    def index():
        cached = db( > 0).select(cache=(cache.ram, 20))
        if db._lastsql:
            print db._lastsql  # SQL is generated the 1st time and when the cache expires
        else:
            print 'no result'  # the select was answered from the cache
        return dict(cached=cached)

  • As you'll notice, db._lastsql returns the SQL the first time and whenever the cache expires.
  • Oh!! In general, you can cache anything in ram/disk using something like:

    variable = cache.ram('key', lambda: create_variable(), 3600)
    # cache.ram or cache.disk

    How does it work?

    If 'key' is in cache.ram and its value is not older than 3600
    seconds, that value is returned; otherwise create_variable is called, its
    result is stored in the cache and returned.
    Of course, you can replace 'key' with any name you want.
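    The cache.ram logic (return the cached value if younger than the timeout, else recompute and store) can be sketched in a few lines of plain Python:

    ```python
    import time

    _store = {}  # key -> (timestamp, value)

    def cache_ram(key, func, time_expire):
        """Return the cached value for key if it is younger than time_expire
        seconds; otherwise call func, store its result, and return it."""
        now = time.time()
        if key in _store and now - _store[key][0] < time_expire:
            return _store[key][1]
        value = func()
        _store[key] = (now, value)
        return value

    calls = []
    def create_variable():
        calls.append(1)
        return 42

    first = cache_ram("key", create_variable, 3600)   # miss: calls the function
    second = cache_ram("key", create_variable, 3600)  # hit: function not called again
    ```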

    Record deletion, keepvalues and field representation

    May 2, 2010

    Suppose you have 2 SQLFORMs to create/update records in a database table:

    def reprs(value):
        result = A(value, _href=URL(r=request, f='index', args=value))
        return result
    def index():
        if not request.args:
            form = SQLFORM(db.my_table)
            if form.accepts(request.vars, session, keepvalues=True):
                response.flash = T('record created')
        else:
            form = SQLFORM(db.my_table, request.args(0), deletable=True)
            if form.accepts(request.vars, session):
                response.flash = T('record updated')
        # attach the representation to a field (the id field is a guess) = reprs
        records = db().select(db.my_table.ALL)
        return dict(form=form, records=records)
  • Now as you can see, we have some fun stuff:
    keepvalues, when set to True while creating a record, makes the form keep its values after the successful creation of the new record, and vice versa.
    keepvalues=True is very useful for successive insertions of similar data.
  • In update forms, keepvalues is always set to True; even if you try to set it to False, it won't work.
    This actually makes sense: you want to see the updated data after
    updating it. Anyway, this can easily be changed in the code in case you have another opinion.
  • table_name.field_name.represent is cool: as you can see, you can make the table you create contain a link to update the record,
    or a link to somewhere else (this depends on your needs).
  • In my opinion, while represent is a function that takes the field value as its argument, it would have been more useful if it could refer to the record as a whole; in that case you could make a field link to an action that takes as argument the value of another field.
    In the current situation you can do something like that by making a query based on the value of the field, which is useless unless the field value is unique, so it is mostly useful with id fields.
    Moreover, if it took the whole record, one would not be forced to make an extra query.
    Of course, something like this can be done in other ways.
    If you have an argument against my point of view, let me know.
  • Another thing to notice: when deleting a record, the form variables should be cleared, which is not the case here.
    In my opinion this is not a good idea and needs to be fixed, because seeing the variables still in the form really tempts one to click the submit button again to see what happens,
    and that returns an 'object not found' error message.
    One way to overcome this issue is to explicitly check, inside the accepts() block, whether the user asked to delete the record:

    if form.accepts(request.vars, session):
        if form.vars.delete_this_record:
            # record was deleted: redirect the user to the same page without
            # arguments (or without the last argument if the page takes many)
            redirect(URL(r=request, f=....))

  • form fields customization, and complex validation

    May 1, 2010

    Making a field value readable but not writable:

    INPUT(_name="username", _value=session.username, _readonly=ON)

    In SQLFORM this is done by adding writable=False to the field in the database table definition:

    db.define_table('city', Field('name', writable=False, default='LO.A'))

    or, equivalently, after the definition:

    db.define_table('city', Field('name', default='LO.A')) = False

    To customize forms as you wish, you may do the following:

    and in the view:

    {{for table in form.components:}}
       {{for tr in table.components:}}
          {{for td in tr.components:}}
             {{for item in td.components:}}
                {{=item}}
             {{pass}}
          {{pass}}
       {{pass}}
    {{pass}}

    The customization may be adding a new field in the view.
    WARNING: for fields added this way, validation won't work at all when the form is accepted.

  • Web2py forms can be treated as lists of lists: you can create a form, print it (to know its exact depth) and insert fields into it before accepting it.

    Somebody could argue about the benefit of this, but it is really very important sometimes.
    Suppose, for example, that you have a table in an old database that is used to create forms, but in one particular situation you need an extra field in the form. Since it is not a good idea to change your db table by adding a new field to it, the function that creates and returns the special form will have to insert the field before accepting the form.
    You may also wish to add a field to a table that is not supposed to have extra fields, like db.auth_user, maybe to determine the category of the user when adding a new one.

    Adding the field from within the view, as shown above, will work but without validation; you can freely add a field from within the view file if validation on it is not important.

    form = SQLFORM(db.auth_user, submit_button=T("Save"))
    # can't do this in the view, since the form needs to be accepted before returning to the view
    form[0].insert(-1, TR(LABEL(T('Category:')),
                          INPUT(_name='category', _id='no_table_category')))

    Why give the new INPUT that id?

    Because the web2py convention when creating form fields that do not belong to a table is to give them the id 'no_table_%s' % field_name. This is important in case you want to add more customization to the field using javascript, or some extra style using css, so we should follow this convention when creating our custom fields.

  • How to check for form errors, and how to force validation to fail on a field?

    form = SQLFORM(....)
    if form.accepts(...):
       # option 1: form accepted
    elif form.errors:
       # option 2: form submitted with errors
    else:
       # option 3: first visit, nothing submitted yet
    In option 2 you can do either of the following

    a) rewrite error messages:

        if form.errors.user_name: form.errors.user_name='oops!'

    b) copy the error messages into a dictionary:

        import copy
        my_errors = copy.copy(form.errors)

    c) clear the errors so that they are not displayed:

        form.errors.clear()
    In the third case (the final else branch), simply set no flash message at all.

    This means: don't display any message. This is important so that the form does not display the error flash when you first enter the page.
    To make it clear, you have 3 situations: the form is accepted; the form is not accepted and form.errors exists; and the form is not accepted and there are no form.errors (when you first navigate to the page containing the form).
    Without this you would always end up with a flash message ('Not accepted'), even when you have just entered the page without submitting the form.

    What if you need more complex validation that has to be performed upon submitting the data?

  • We know that any code inside the accepts() block actually executes after the data has been accepted and put in the database (in the case of SQLFORM),
    so what to do?
  • It's easy: you specify another function that performs the extra validation upon submission of the form data, via the onvalidation argument:

    # In model:
    db.define_table('city', Field('name'))
    # In controller:
    def validate_city(form):
        if != 'LOA':
   = T('only LOA is allowed')
    def index():
        form = SQLFORM(
        if form.accepts(request.vars, session, onvalidation=validate_city):
            response.flash = T('city added')
        return dict(form=form)
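    The control flow of onvalidation can be sketched without web2py: accepts() runs the callback before committing, and any error the callback records makes accepts() fail (the class and method below are a made-up stand-in, not web2py code):

    ```python
    class FakeForm:
        """Minimal stand-in for a web2py form, just to show the control flow."""
        def __init__(self, vars):
            self.vars = vars
            self.errors = {}

        def accepts(self, onvalidation=None):
            # built-in field validators would run first; then the callback
            # gets a chance to veto the submission by filling self.errors
            if onvalidation is not None:
                onvalidation(self)
            return not self.errors  # only "insert" when there are no errors

    def validate_city(form):
        if form.vars.get("name") != "LOA":
            form.errors["name"] = "only LOA is allowed"

    ok = FakeForm({"name": "LOA"}).accepts(onvalidation=validate_city)   # accepted
    bad_form = FakeForm({"name": "Paris"})
    rejected = bad_form.accepts(onvalidation=validate_city)              # not accepted
    ```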

    You can even introspect the form tree:

    # form.components[0] is the LABEL
    # form.components[0].components[0] is "Username:"
    # form.components[0].components[1] is the INPUT field
    # form.components[0].components[1].attributes['requires'] is the
    # IS_NOT_EMPTY object

    Technically you can also modify these, but that would work only before accepts().