Mike,
I like it very much. Think about how session variables are normally kept on a web server: they are stored in a session file. When you access them, they are read into memory. When you update a session variable (i.e. write to a ScriptCase global variable), the file is updated. The file lives until the session expires, at which point the session file is deleted. How much more overhead can there possibly be if, instead of a file managed by Apache/PHP, the data is stored in a database managed by MySQL? If Apache/PHP and MySQL are running on the same server, many people say that MySQL access to session data is actually faster than Apache/PHP’s file handling.
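In case it helps to see what that looks like in code, here is a minimal sketch of a database-backed session handler using PHP’s SessionHandlerInterface (PHP 8-style signatures). The “sessions” table name, its columns, and the connection credentials are placeholders for illustration, not our production setup:

<?php
// Minimal sketch of a database-backed PHP session handler (PHP 8 signatures).
// The "sessions" table, its columns, and the connection details below are
// placeholders for illustration only.
class DbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $db) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $stmt = $this->db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';   // empty string = no session yet
    }

    public function write(string $id, string $data): bool
    {
        // Every write refreshes last_access, which is what the idle timeout keys on.
        $stmt = $this->db->prepare(
            'REPLACE INTO sessions (id, data, last_access) VALUES (?, ?, NOW())'
        );
        return $stmt->execute([$id, $data]);
    }

    public function destroy(string $id): bool
    {
        return $this->db->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        // Remove sessions that have been idle longer than the configured lifetime.
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE last_access < FROM_UNIXTIME(?)');
        $stmt->execute([time() - $max_lifetime]);
        return $stmt->rowCount();
    }
}

$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'app_user', 'secret'); // placeholder credentials
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();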
So the next question would be, “What about when the SQL server is on a different machine than the web server?” That is my case. Between the servers I have a 100 Mbit Ethernet backchannel, and you absolutely cannot notice any difference in speed, whether the session is in the database or in files.
In terms of how long the data lives in the database, it is deleted when the session expires, which is defined as some configurable period without any writes to a session variable. You can also manipulate the session via PHP functions - kill a session, start a new one, etc. But the timeout, say 15 minutes, gets reset every time a session variable (i.e. a ScriptCase global variable) is modified. So if you close your browser without logging out, the session will disappear from the server 15 minutes after the last session variable was written. Remember, web servers are stateless; they have no idea that you have closed your browser. This is the same timeout you sometimes experience in ScriptCase Development Mode.
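To make the timeout behavior concrete, here is an illustrative snippet using standard PHP session calls; the 15-minute value just mirrors the example above and is not our exact configuration:

<?php
// Illustrative only: a 15-minute idle timeout, mirroring the example above.
ini_set('session.gc_maxlifetime', 900);  // seconds of inactivity before garbage collection may remove the session
session_set_cookie_params(0);            // session cookie lives only until the browser closes
session_start();

// An explicit logout, by contrast, kills the session immediately:
$_SESSION = [];
session_destroy();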
In my environment, it is very important to us to have a scalable server farm. If our users go from 2,000 to 20,000 unexpectedly, we need to be able to scale quickly and easily. As a result, MySQL runs on two dedicated servers configured for master-master replication, so you can access either one and get exactly the same data, and either one can die without anything being adversely affected. I run an HAProxy front end that balances the load between the two SQL servers (a stripped-down example follows below). I can add more MySQL servers as needed, very easily, even in geographically different places; it takes minutes, not hours, to bring a new one online.

Likewise, I have two web servers, both running identical production copies. Again, the load is balanced so that requests go to the least busy server, I can add more servers whenever I like, and either one can die without affecting anything else. As a matter of fact, either web server can die at the same time as either SQL server and the web site is still up - this is called “High Availability”. Because the session data is stored in the database, if you are on web server A and it dies suddenly, your next page access will come from web server B and the user will never even know it. If the web servers were storing the session data in a local file instead of the database, the session would be lost when the web server died and the user would have to log in again and begin a new session on the other web server.
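For illustration, the MySQL side of that HAProxy front end might look something like the sketch below; the hostnames, IPs, and check user are made up, and our real configuration has quite a bit more to it:

listen mysql-cluster
    bind *:3306
    mode tcp
    balance leastconn
    option mysql-check user haproxy_check
    server db1 10.0.0.11:3306 check
    server db2 10.0.0.12:3306 check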
So yes, I like sessions being stored in the database. In all of our testing there is no noticeable slowdown, and it enables us to do many things that improve the responsiveness, robustness, and metrics of the overall system.
Sorry for the long-winded explanation, but we have spent many hundreds of engineer-hours analyzing performance & resilience metrics and refining our ideal scalable cluster. I guess I am like a proud papa describing my daughter’s school performance.
Dave