ADF BC Tuning V: View Objects, Part 3

I’m back from the wilds of December, and to my regular schedule. I had intended to talk about view links this week, but I realized there were two important things about tuning view objects that I forgot to discuss in Part 1 or Part 2: View link consistency and in-memory filtering.

View Link Consistency

Like association consistency, view link consistency affects whether new rows appear in the query collections for accessors (view link accessors this time, rather than association accessors). But it does something more: It affects how new entity rows appear in the query collections of view objects.

Suppose you have two view objects that both query rows from the EMPLOYEES table, and are both (at least in part) based on the Employees entity object. Let’s call them EmpsWithManager and EmpsWithDept, and give them the following queries:

EmpsWithManager:

WHERE Employees.MANAGER_ID = Managers.EMPLOYEE_ID (+)

EmpsWithDept:

WHERE Employees.DEPARTMENT_ID = Departments.DEPARTMENT_ID

Now, suppose we insert a new employee via an instance of EmpsWithManager (and are careful to assign it to a department), and suppose we have a separate instance of EmpsWithDept. When will the new row show up in EmpsWithDept’s cache?

With view link consistency mode on (the default), it will show up immediately: as soon as an Employees entity object instance is inserted into the cache, it notifies all view object instances that use Employees, which create appropriate view rows to go into their caches. Most of the time, this is the behavior you want; it certainly beats having to post the row and re-execute EmpsWithDept’s query to get it to show up. But, as in the case of associations, you might not need it to show up immediately; perhaps you know you’re not going to access the EmpsWithDept instance before you commit anyway, or perhaps EmpsWithDept is an insert-only VO and you’re not displaying its rows at all.
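The essence of this notification mechanism can be sketched outside the framework in plain Java (the class and field names below are my own illustration, not ADF API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of view link consistency: an entity cache that, on insert, notifies
// every registered view cache so a matching view row appears immediately,
// without re-running any query. Hypothetical names, not ADF classes.
class ViewCache {
    final List<Map<String, Object>> rows = new ArrayList<>();
    boolean consistent = true; // analogous to view link consistency mode
}

class EntityCache {
    private final List<ViewCache> listeners = new ArrayList<>();
    private final List<Map<String, Object>> rows = new ArrayList<>();

    void register(ViewCache viewCache) { listeners.add(viewCache); }

    void insert(Map<String, Object> entityRow) {
        rows.add(entityRow);
        for (ViewCache viewCache : listeners) {
            // With consistency off, the view cache is skipped entirely,
            // saving the notification work when nobody will read the rows
            if (viewCache.consistent) {
                viewCache.rows.add(entityRow);
            }
        }
    }
}
```

A cache with consistency turned off simply never hears about the insert; when its rows won’t be read before the data is posted anyway, skipping that work is pure savings.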

In this case, it might make sense to turn view link consistency off: not requiring the entity object instance to notify the view object will save some time and improve performance. You could do this globally by setting a configuration parameter (jbo.viewlink.consistent), but you usually won’t want to, since view link consistency is desirable in the large majority of cases; better to turn it off just for specific view objects. You can, instead, use code in your view object instance’s create() method, or better yet, in the create() method of a custom framework view object class, like the following:

protected void create() {
    super.create();
    String viewLinkConsistent = (String) getViewDef().getProperty("ViewLinkConsistent");
    if (viewLinkConsistent != null) {
        // "ViewLinkConsistent" is a custom property defined on each view object;
        // setAssociationConsistent() controls view link consistency for this instance
        setAssociationConsistent(viewLinkConsistent.equals("true"));
    }
}

So, the takeaway: while view link consistency is generally a good thing, consider turning it off if you don’t need it.

In-Memory Filtering

Now, let’s suppose that you have view link consistency on, and when you insert the new row into EmpsWithManager, you don’t assign the employee to a department. You might expect that, since EmpsWithDept involves an inner join between Employees and Departments, an employee with a null DepartmentId wouldn’t show up in EmpsWithDept’s cache. But by default, it does: the WHERE clause of a view object’s query only gets applied when the query is actually run against the database, not every time the view object is notified of an entity insert.

Or suppose that EmpsWithDept has named view criteria, and you turn on those criteria in your application. Again, by default, you’ll need to re-run the view object’s query before the criteria apply to the data.

There are several ways to fix this, all of which involve some sort of in-memory row filtering. To support view link consistency, you can follow Steve Muench’s post on how to override ViewObjectImpl.rowQualifies(). For immediate filtering by named view criteria, you can use “In Memory” or “Both [in SQL and in memory]” query execution mode.
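Outside the framework, the idea behind a rowQualifies()-style override is just a predicate consulted for each candidate row before it is admitted to the collection. A minimal plain-Java sketch (the classes and the DepartmentId rule are illustrative stand-ins, not ADF code):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Framework-independent sketch of in-memory row qualification: only rows
// passing the predicate make it into the view collection. In ADF, the
// analogous hook is the rowQualifies() override described in Steve's post.
class RowFilter {
    // Mimics the inner join's effect on unposted rows: an employee with a
    // null DepartmentId shouldn't appear in an employees-with-departments view
    static final Predicate<Map<String, Object>> HAS_DEPARTMENT =
            row -> row.get("DepartmentId") != null;

    static List<Map<String, Object>> qualify(List<Map<String, Object>> candidates,
                                             Predicate<Map<String, Object>> rule) {
        return candidates.stream().filter(rule).collect(Collectors.toList());
    }
}
```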

Since we’re talking about tuning, I won’t go into detail about how to apply in-memory filtering; Steve explains his technique well, and the doc does a fine job of explaining in-memory filtering using view criteria. Instead, I’m going to talk about when to apply in-memory filtering. Obviously, if you simply don’t care whether inappropriate rows are filtered out of the view object instance’s cache (as is generally the case, for example, with insert-only VOs), then there’s no reason whatsoever to apply in-memory filtering; it’s just a performance drag. However, if you do care, and want inappropriate rows to be filtered out of the cache, you basically have three alternatives:

  1. Apply in-memory row filtering.
  2. Post data and re-execute the VO’s query every time you create a new entity object instance (or, for batch operations, at the end of whatever process creates all the new rows), and every time you turn on a query criterion.
  3. Live with inappropriate data until you’re going to post and requery anyway.

If option 3 above is possible for you, I recommend it: both requerying the database and applying in-memory filters are potentially costly operations, and if you have a separate reason to post and requery data before you need to filter out inappropriate data, it’s certainly wise to just wait for that. If not, we’re left with options 1 and 2, each of which has advantages in particular situations.

Option 1 and Option 2 fare differently under different circumstances. To see the difference, let’s look at three possible cases:

Supporting View Link Consistency for One-at-a-Time Insertions

Suppose your users are going to be creating entity object instances (via another view object) once per form submission. Applying in-memory criteria to one row at a time (as Steve does) really isn’t very costly, and hitting the database twice (once to post and once to requery) every time the user hits a Submit button is. So in this case, you pretty much always want to use Option 1, with a method like Steve’s.

Supporting View Link Consistency for Batch Insertions

Now, suppose you have a different app, where, with one click of a Submit button, the user fires a service method that causes 200 entity object instances to be created. This is a pretty rare case, but it could happen, especially when there’s a multi-select control or a shuttle that rapidly creates rows in an intersection table. In this case, you could put the post-and-requery at the end of the service method. So you have a choice between firing your row filtering logic 200 times, or hitting the database twice. Here, it’s a lot less clear which is more efficient (it depends a lot on the performance level of the database and the quality of the app server’s connection to it); you might want to try both and see how they do against load testing.

Applying Named View Criteria

Now, let’s look at the search case: Through a query component or via a programmatic call, your user is going to apply named view criteria (possibly with particular bind variable values) to a view object instance, and you need a filtered set of results immediately.

Of course, if you haven’t executed the query before you apply the view criteria, and you’re not going to remove or change the view criteria later, you might as well go ahead and execute the query. Depending on what you want to do with new rows (whether they need to be filtered too), you can use either “Database” or “Both” query mode for your view object criteria, and just execute the view object’s query after applying the criteria (ADF widgets like query components do this for you).

So the real questions come up if you have already executed the query and want to weed out rows as the criteria are applied, or if you anticipate changing or removing the criteria or variable values later.

First, let’s look at queries that have already been executed. Do you in-memory filter the extra rows out, or requery for no other purpose than to get filtered rows?

Here, the deciding factor is the number of rows that have been, or are likely to be, retrieved into the view object cache. If, say, the average user will bring 2 JDBC fetches of 15 rows each into the cache, the trade-off is between an extra database query and applying your filter logic to an average of 30 rows. Unless your filter logic is very complex, this definitely favors filtering in memory: go ahead and use a query type of “In Memory” for your view criteria. If, on the other hand, the user is likely to bring in 500 rows, the trade-off may swing the other way, and you might want to use a query type of “Database” or “Both” (depending on how you want to handle new rows) for your view criteria, and simply re-execute the view object’s query when applying them.

Now, what if you may want to change or remove the criteria or variable values later (but haven’t yet executed the query)? You have the option of using “Database” or “Both” modes, which will cause only the rows that match the criteria to be retrieved from the database, or using “In Memory” mode, which will retrieve all rows (including those that don’t match the criteria) and simply keep non-matching rows out of the view object instance’s primary query collection.
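The practical difference between the modes can be sketched in plain Java (illustrative names only; a list of integers stands in for the table, and a Predicate for the criteria):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Sketch of the two execution strategies: "database" mode fetches only
// matching rows, so changing criteria means re-fetching; "in-memory" mode
// fetches everything once and filters the visible collection, so criteria
// can change without another trip to the database. Hypothetical names.
class CriteriaDemo {
    final List<Integer> table;   // stands in for the database table
    List<Integer> fetched;       // rows actually brought into the cache
    List<Integer> visible;       // the primary query collection

    CriteriaDemo(List<Integer> table) { this.table = table; }

    void executeDatabaseMode(Predicate<Integer> criteria) {
        // only matching rows ever leave the "database"
        fetched = table.stream().filter(criteria).collect(Collectors.toList());
        visible = fetched;
    }

    void executeInMemoryMode(Predicate<Integer> criteria) {
        fetched = List.copyOf(table); // everything comes over, match or not
        applyInMemory(criteria);
    }

    void applyInMemory(Predicate<Integer> criteria) { // no re-fetch needed
        visible = fetched.stream().filter(criteria).collect(Collectors.toList());
    }
}
```

The cost of the in-memory route is visible in `fetched`: every row the unqualified query would return crosses the wire, which is exactly the 150,000-row hazard discussed below.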

If you use “Database” or “Both” modes and later change the query criteria, there won’t be any way, short of re-executing the query, to access any rows that the new criteria admit but the old ones do not. Maybe, because many rows are going to be retrieved by the new query criteria, re-executing the query is what you’ll want to do anyway. But maybe only a few rows will be retrieved by the new query criteria, and you want to be able to show them all as efficiently as possible, which, as we saw above, means using in-memory filtering.

If you anticipate that, you have the option of using “In Memory” query mode for the current (as opposed to the later) view criteria, allowing even non-matching rows to be queried, which will let you switch criteria without re-executing the query. But there’s a trade-off: suppose that, while all your view criteria limit the view object’s query result down to just a few rows, applying no view criteria at all will make the view object’s query return, say, 150,000 rows. You really don’t want to be executing 150,000-row queries for each user. Using “Database” or “Both” query mode and simply re-executing when the view criteria change is definitely a better idea.


To summarize:

  • To support view link consistency for single-row insertions, filter in memory.
  • To support view link consistency for large batch insertions, consider posting and re-executing rather than in-memory filtering.
  • If your view criteria will be applied before your view object is first queried, and you will never need to remove or change the criteria or their variable values: use “Database” query mode if you don’t need to worry about filtering unposted rows, and “Both” mode if you do.
  • If your view criteria will be applied to an already-queried view object instance, use “In Memory” mode for view objects that won’t fetch a lot of rows, and re-execute the query for view objects that will.
  • If your view criteria may need to be removed, use “In Memory” mode for view objects that won’t query a lot of rows even when unqualified, and “Database” or “Both” mode (depending on whether you need to filter unposted rows) for view objects that will query a lot of rows when unqualified.
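Purely as an illustration, the summary rules above can be condensed into a small decision helper (the names and the enum are my own shorthand, not ADF constants):

```java
// Illustrative encoding of the summary rules; each boolean corresponds to
// one of the questions asked in the bullets above.
class QueryModeAdvisor {
    enum Mode { DATABASE, BOTH, IN_MEMORY, REEXECUTE }

    static Mode advise(boolean alreadyExecuted,
                       boolean criteriaMayChange,
                       boolean fetchesManyRows,
                       boolean mustFilterUnpostedRows) {
        if (!alreadyExecuted && !criteriaMayChange) {
            // Criteria applied before the first query and never removed:
            // let SQL do the work; "Both" also covers unposted rows
            return mustFilterUnpostedRows ? Mode.BOTH : Mode.DATABASE;
        }
        if (fetchesManyRows) {
            // Large result sets make in-memory filtering too expensive:
            // re-execute an already-run query, or pick a SQL-backed mode
            return alreadyExecuted ? Mode.REEXECUTE
                                   : (mustFilterUnpostedRows ? Mode.BOTH : Mode.DATABASE);
        }
        return Mode.IN_MEMORY; // few rows: filter the cache directly
    }
}
```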

Phew, that’s a mouthful. But next week we’ll talk about view links, which will be a bit simpler.
