ADF BC Tuning VIII: Application Modules, Part 2

Last week, I talked about a couple of tuning opportunities for ADF application modules: Lazy loading and shared application module instances. This week and next, I’m going to talk about a way to tune your application modules that is, in my opinion, even more important: Knowing how and when to adjust your application module pool settings. This week, I’m going to write about what application module pooling is; it’s pretty vital that you understand it before you try to tune it. Next week, in the final post in the ADF BC Tuning series, I’ll provide the actual application module pool tuning tips.

Each application module instance manages caches of entity object instances and view rows. These caches can be pretty big: They include all the data retrieved from the database so far, plus any changes the user has made in the current transaction. This is something to think about if your application might have even 50 users with simultaneous sessions, let alone thousands or more. Nothing kills an application's performance like filling up memory and thrashing around in your disk swap space. Moreover, establishing a connection to the database is a time- and resource-consuming operation; creating a new connection every time a user session begins can lead to performance problems and network congestion.

Fortunately, ADF takes care of these problems for you, using application module pools, resource managers that determine which caches need to be maintained in memory, which can be temporarily swapped to the database (to a table called PS_TXN, by default), and which can be safely discarded. There is usually one application module pool for each application module definition in your application.
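
The pool for each application module definition is configured in the project's bc4j.xcfg file. The fragment below shows a few pooling-related properties; the property names (jbo.ampool.initpoolsize, jbo.ampool.maxpoolsize, jbo.recyclethreshold, jbo.passivationstore) are standard ADF BC pool parameters, but the element layout and the values shown are illustrative, so check them against your own bc4j.xcfg and ADF version:

```xml
<!-- Fragment of a bc4j.xcfg application module configuration.
     jbo.ampool.initpoolsize: instances created when the pool starts
     jbo.ampool.maxpoolsize:  hard cap on instances in the pool
     jbo.recyclethreshold:    the "referenced pool size" discussed later in this post
     jbo.passivationstore:    "database" writes passivated state to PS_TXN -->
<AppModuleConfig name="AppModuleLocal">
  <AM-Pooling jbo.ampool.initpoolsize="0"
              jbo.ampool.maxpoolsize="4096"
              jbo.recyclethreshold="10"
              jbo.passivationstore="database"/>
</AppModuleConfig>
```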

Application module pooling is transparent to the application, and understanding how it works isn’t trivial, so a lot of ADF developers write applications without knowing how it works. But that’s a bad idea, because not understanding how to set the pool’s properties can lead to serious breakdowns of scalability. Not as serious as the aforementioned memory swaps, but still pretty bad.

Clients generally interact with the application module pool in three steps: They “check out” application module instances in order to use them, they “check in” those instances when they’re done with them for a little while, and then they re-check them out if they need them again. Although that’s the chronological order, it’s not the easiest order to explain them in. So I’ll start with the process of checking in.

Checking Application Modules Into the Pool

When a user is temporarily done using an application module instance (that is, after it has served the response to a particular request, during the user's "think time"), the framework checks the instance into the pool. The HTTP session maintains an application module cookie, an object keyed to a session cookie downloaded to the user's browser. This cookie can be used to request that the same application module instance (with all of its data caches) be checked out again later. While the session continues, the application module instance is marked as "managed"; when the user's session expires, the application module cookie disappears, the instance's caches are cleared, and the instance is marked as "stateless." Whether an instance is managed or stateless affects what happens the next time an application tries to check it out of the pool.

Checking Application Module Instances Out of the Pool for the First Time

When the user first tries to access data, the ADF model layer requests an application module instance from the pool. If the pool already contains a stateless instance, it returns that instance. Otherwise, the pool looks at its managed instances. If there are no managed instances either, or if their number is below a threshold called the referenced pool size, the pool creates and returns a new stateless instance. So far, so simple.

If, however, creating a new instance would exceed the referenced pool size, and there is at least one managed application module instance in the pool, the pool will passivate the managed instance that has been inactive longest, clear out its caches, and return it.
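
The checkout decision described above can be sketched in plain Java. This is a simplified model, not actual ADF code: AmInstance, AmPool, and all of their members are invented names for illustration; the real pool lives inside the framework.

```java
import java.util.*;

// Simplified model of the first-checkout decision; all names are invented.
class AmInstance {
    boolean managed;   // true while a session holds an AM cookie for it
    long lastUsed;     // lets the pool find the longest-inactive instance
    Map<String, Object> caches = new HashMap<>();
}

class AmPool {
    final List<AmInstance> instances = new ArrayList<>();
    final int referencedPoolSize;   // plays the role of jbo.recyclethreshold

    AmPool(int referencedPoolSize) { this.referencedPoolSize = referencedPoolSize; }

    AmInstance checkOut() {
        // 1. Prefer an available stateless instance.
        for (AmInstance am : instances)
            if (!am.managed) return am;
        // 2. No stateless instance: if the number of managed instances is
        //    still below the referenced pool size, create a new one.
        //    (Every instance is managed at this point, so size() is that number.)
        if (instances.size() < referencedPoolSize) {
            AmInstance am = new AmInstance();
            instances.add(am);
            return am;
        }
        // 3. Otherwise, recycle the longest-inactive managed instance:
        //    passivate its state, clear its caches, hand it out stateless.
        AmInstance victim =
            Collections.min(instances, Comparator.comparingLong((AmInstance a) -> a.lastUsed));
        passivate(victim);
        victim.caches.clear();
        victim.managed = false;
        return victim;
    }

    void passivate(AmInstance am) {
        // In ADF this writes the instance's state to PS_TXN; omitted here.
    }
}
```

Running the three-step scenario through this model shows why the threshold matters: once the number of managed instances reaches the referenced pool size, some session's instance must be passivated and recycled.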

What’s passivation? It’s a process by which the application module instance’s transactional state gets written to the database. This includes new, changed, or deleted persistent data; by default, it also includes row filters and current row pointers; and optionally, it includes values of transient view attributes. I talked about turning those last two options on or off here (scroll down to “Controlling Passivation of View Objects”). That way, the state of the transaction can be re-created later (which is important, as we’ll see next).
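
As a concrete example, the global switch for passivating transient view attribute values is, as far as I know, the jbo.passivation.TransientVOAttributes configuration property; the element placement below is my assumption about the bc4j.xcfg layout (the per-view-object and per-attribute settings live in the JDeveloper design time instead):

```xml
<!-- In bc4j.xcfg: also passivate transient VO attribute values
     (jbo.passivation.TransientVOAttributes defaults to false) -->
<AppModuleConfig name="AppModuleLocal">
  <Custom jbo.passivation.TransientVOAttributes="true"/>
</AppModuleConfig>
```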

Checking Application Module Instances Back Out of the Pool

When a particular session requests an application module instance that it previously released as managed, the pool checks whether that instance has since been passivated, cleared out, and reused. If not, the pool returns the same instance to the application.

If the application module instance has already been passivated, the application module pool obtains a new application module instance (using the same strategy as for a first check-out request) and activates it, restoring its state from the database table PS_TXN. The activated application module instance is returned to the session, identical to the application module instance that the session released.
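
The passivation/activation round trip can be modeled the same way: state is snapshotted to a store under an id that the application module cookie remembers, and later restored into whichever instance the pool hands back. Again a simplified sketch with invented names; the Map stands in for the PS_TXN table, whereas real ADF serializes the state as an XML document:

```java
import java.util.*;

// Simplified model of passivation and activation. SessionStateStore and
// its methods are invented names; the Map stands in for PS_TXN.
class SessionStateStore {
    private final Map<Integer, Map<String, Object>> store = new HashMap<>();
    private int nextId = 1;

    // Passivate: snapshot an instance's pending state under a new id.
    int passivate(Map<String, Object> caches) {
        int id = nextId++;
        store.put(id, new HashMap<>(caches));
        return id;   // in ADF, the AM cookie keeps track of this id
    }

    // Activate: restore a snapshot into a (possibly different) instance,
    // removing it from the store once it has been consumed.
    void activate(int id, Map<String, Object> caches) {
        caches.clear();
        caches.putAll(store.remove(id));
    }
}
```

The point of the model: the session gets its state back even though the physical instance serving it may be a different object entirely.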

Hopefully, now you understand how application module pooling works. Next week, I’ll talk about how to make it work for your application.
