Extreme Reusability, Part II

Last week, I introduced the ADF development methodology I’m proposing, “Extreme Reusability,” articulated its goals, and discussed the techniques of “Generalize, Push up, and Customize” and “Think Globally, Deploy Locally” that are critical to the methodology. I didn’t, however, describe the actual…well, methodology, meaning the development cycle prescribed by Extreme Reusability.

Notice I didn’t say the application development lifecycle. That’s because developing under Extreme Reusability, like developing under SOA, isn’t primarily about the creation of standalone applications. You should think of the development cycle for Extreme Reusability as part of an enterprise-wide effort.

Development under Extreme Reusability involves developing along three separate but interacting (and communicating–communication is absolutely vital under this system) tracks: framework development, service development, and application development. These tracks are assigned to different individuals on the team, in somewhere around a 20-60-20 division for a typical organization’s needs (at a guess–remember, this is a proposed methodology).

The Three Tracks

The Framework Development Track

This is the track on which to put your Java developers, because this is the track that (among other things) is responsible for the vast majority–in the theoretical ideal, all–of your Java development. The framework track creates the stuff that is truly enterprise-wide: the stuff that *every* other developer is going to import into their applications. In particular, here’s what people on the framework track do:

  • Create ADF BC custom framework classes.
  • Create custom ADF BC validators (if you do model-level validation) and, if you have developers who are basically competent with Swing, customizers for those validators.
  • Create custom controller classes (e.g., lifecycle changes, classes for managed beans, and so on).
  • Create custom Faces components (declarative or otherwise).
  • Implement an enterprise-wide LAF using skins.
  • Deploy as a library.
  • Receive and implement enhancement requests from the other tracks.

This is the track that needs to implement the “Generalize, Push up” part of “Generalize, Push up, and Customize”. When track developers receive an enhancement request (“I need to be able to use this package API instead of DML”), it is their responsibility to figure out how it might be generalized (“I need to be able to use any package API instead of DML”) in such a way that it can be declaratively customized (by setting properties on an entity object definition and its attributes). I gave an outline of how this could be done for the package API case here, but, as I stated last week, the principle applies throughout the application layers (with “managed properties” or “custom component attributes” substituting for “business component custom properties”).
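
To make this concrete, here’s a minimal sketch of the kind of framework class such an enhancement request might produce. Everything specific in it–the custom property name “DML_API_PACKAGE”, the insert_row/update_row/delete_row naming convention, the single key parameter–is a made-up illustration, not a prescription:

```java
// Hedged sketch: a custom framework EntityImpl that routes DML through a
// PL/SQL package API when an entity definition opts in declaratively.
import java.sql.CallableStatement;
import java.sql.SQLException;

import oracle.jbo.JboException;
import oracle.jbo.server.EntityImpl;
import oracle.jbo.server.TransactionEvent;

public class MyAppEntityImpl extends EntityImpl {

    @Override
    protected void doDML(int operation, TransactionEvent e) {
        // Read a custom property set declaratively on the entity definition.
        String pkg = (String) getEntityDef().getProperty("DML_API_PACKAGE");
        if (pkg == null) {
            super.doDML(operation, e); // no package API configured: plain DML
            return;
        }
        String proc;
        switch (operation) {
        case DML_INSERT: proc = "insert_row"; break;
        case DML_UPDATE: proc = "update_row"; break;
        case DML_DELETE: proc = "delete_row"; break;
        default:         super.doDML(operation, e); return;
        }
        // A real framework class would bind every attribute as a parameter;
        // a single key parameter stands in for that here.
        CallableStatement cs = getDBTransaction().createCallableStatement(
                "BEGIN " + pkg + "." + proc + "(?); END;", 0);
        try {
            cs.setObject(1, getAttribute(0)); // assume attribute 0 is the key
            cs.executeUpdate();
        } catch (SQLException ex) {
            throw new JboException(ex);
        } finally {
            try { cs.close(); } catch (SQLException ex) { /* ignore */ }
        }
    }
}
```

The point is the division of labor: the framework class carries all the Java, and an entity developer opts in purely declaratively, by setting one custom property.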

The Service Development Track

This is probably where most developers will be working in a typical organization. These people develop “services,” not in the sense of true web services, but in the sense described in the principle “Think Globally, Deploy Locally”. Not every application will use the services these developers create, but the idea is that the individual services may be combined and recombined to create a variety of applications. In particular, here’s what people on the service development track do:

  • Import the framework into all of their applications
  • Create libraries of entity object definitions (generally 1 library/schema)
  • Import these libraries into all of their applications that use the relevant schemas
  • Receive requests for particular services from the application development track
  • Create “data-only” services (i.e., services that require only service methods, not a UI) as libraries containing an application module and required view objects
  • Create “full” services (i.e., particular subtasks that involve a UI) as reusable task flows, including page fragments and business components
  • Request framework enhancements (anything requiring Java coding or a new LAF style) from the framework development track

When developers on this track receive a request for a custom service, they should first see whether it can be composed from existing services, and do so if possible. Otherwise, they need to consider whether the request can be generalized–made more reusable by allowing declarative customization through parameters. The idea here, just as in SOA, is to develop a collection of services with good potential for reuse across the enterprise.
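
As an illustration, here’s roughly what a generalized, data-only service method might look like. The application module, view object, bind variables, and attribute names are all hypothetical:

```java
// Hedged sketch of a generalized, data-only service method: one
// parameterized method instead of a one-off "approveReportsForDept10()".
import oracle.jbo.Row;
import oracle.jbo.domain.Number;
import oracle.jbo.server.ApplicationModuleImpl;
import oracle.jbo.server.ViewObjectImpl;

public class ExpenseServiceAMImpl extends ApplicationModuleImpl {

    public void approveExpenseReports(Number departmentId, String status) {
        ViewObjectImpl vo = (ViewObjectImpl) findViewObject("ExpenseReports");
        vo.setNamedWhereClauseParam("deptId", departmentId);
        vo.setNamedWhereClauseParam("status", status);
        vo.executeQuery();
        while (vo.hasNext()) {
            Row r = vo.next();
            r.setAttribute("ApprovalStatus", "APPROVED");
        }
        // Deliberately no commit: the consuming application's root
        // application module owns the transaction.
    }
}
```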

The Application Development Track

Note that I’m suggesting that only about 20% of the developers actually need to be developing applications. That’s because applications, on this model, are largely just strings of composed services. The developers on this track need to do the following:

  • Import the framework into every application
  • Identify existing and new required services for the application
  • Request new services from the services track
  • Create an application module to manage the application’s transaction
  • Nest data-only service application modules in the main application module, or create shared instances of them, as appropriate (see the sketch after this list)
  • Create a “frame” ADF Faces page in which UI services can run
  • Create an unbounded task flow to string together services
  • Request framework enhancements (any new required Java code) from the framework track
  • Deploy the application as an EAR to an application server
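
Here’s the sketch promised above: a hypothetical root application module that nests the (equally hypothetical) expense service from the previous sketch and manages the application’s one transaction:

```java
// Hedged sketch of an application-track root AM. "ExpenseService1" is the
// instance name the nested data-only service AM was given in this AM's
// definition; both names are assumptions for illustration.
import oracle.jbo.domain.Number;
import oracle.jbo.server.ApplicationModuleImpl;

public class MainAppModuleImpl extends ApplicationModuleImpl {

    public void approveAndCommit(Number departmentId) {
        ExpenseServiceAMImpl svc =
            (ExpenseServiceAMImpl) findApplicationModule("ExpenseService1");
        svc.approveExpenseReports(departmentId, "PENDING");
        // Nested AMs share the root AM's connection and transaction, so
        // one commit here covers the work of every nested service.
        getDBTransaction().commit();
    }
}
```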

Note that, in a “pure” case (which may be an unreachable ideal, but the goal is to strive for it), application track developers won’t be developing entity objects, view objects, a real data model (as opposed to just a bunch of nested AM instances), pages (besides the “frame” page, which has few or no databound components), etc. Most of their work, actually, is just in identifying the services their application requires and working with the service development team to make sure the services work to spec.

Source Control

In most ways, this divided methodology actually makes source control easier. The relative granularity of the modules, and the assignment of particular developers to each module, makes merge conflicts far less likely.

On the other hand, precisely because of this granularity, some care needs to be taken to ensure that all developers are using the same versions of the libraries they share, and that changes to a set of libraries do not simply break working applications. This takes some care, but it really isn’t that difficult. Here are some tips:

  • Libraries should be deployed to a central, networked location for incorporation. Where practicable, developers should actually use the networked library in their applications. In some cases (e.g., remote developers over a slower connection with VPN encryption), this may not be practicable, so these developers should make a point of downloading the libraries each day. (This can, if desired, be done fairly easily with a single ANT script.)
  • Specifications are extremely important. In particular, framework developers need to create interfaces for all of their custom code, and these interfaces should lose methods, or change the function or signature of methods, only after an appropriate deprecation period and careful communication with the other tracks. (Adding methods is much lower-impact, so to make, e.g., a signature change, add the new method and deprecate the old one; see the interface sketch after this list.) Service developers need to maintain documentation describing exactly what the inputs, outputs, and behavior of their services are, and break these contracts only after careful communication amongst themselves and with application developers.
  • Extreme Reusability makes unit testing relatively easy, because code is divided into well-defined units. You need to take advantage of this; continuous testing against established specs is critical.
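
For the interface-evolution tip above, the pattern is just standard Java deprecation. Here’s a hypothetical framework interface making a signature change the low-impact way:

```java
// Hypothetical framework interface: add the new method, deprecate the old
// one, and remove it only after the agreed deprecation period.
import java.util.Locale;

public interface LookupService {

    /** @deprecated Use {@link #findLabel(String, Locale)} instead. */
    @Deprecated
    String findLabel(String lookupCode);

    /** Returns the display label for a lookup code in the given locale. */
    String findLabel(String lookupCode, Locale locale);
}
```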

Conclusion

So, that’s it, at least for the first draft. I’m curious about your feedback (I’m always curious about your feedback, but I’m especially curious in this case), so please, either post on the ADF Methodology Google group, drop me a line, or share your comments below.

I’m also going to post a version of this to the Oracle Wiki (I’ll edit this to add a link when it’s up), so there will be a living, shared document in addition to this first version.

3 thoughts on “Extreme Reusability, Part II”

  1. hi Avrom

    Since you point it out as extremely important, how do you suggest that service developers maintain documentation describing exactly what the inputs, outputs, and behavior of their services are? For example, also for what you call “full” services (i.e., particular subtasks that involve a UI).

    Preventing these libraries, and their different versions, from becoming a mess seems quite hard in this Extreme Reusability methodology you propose.

    regards
    Jan Vervecken

  2. Hi Jan –

    So, I want to distinguish two sorts of inputs/outputs. Both are important to document, but they should be documented in different ways.

    One sense of inputs/outputs (which applies only to “full” services) is user inputs/browser outputs, business rules governing them, etc. Obviously, developers using the services need to know about these, but so do the users. I think user documentation of these inputs and outputs should suffice for most purposes.

    The other sense of inputs/outputs that needs to be documented is really only of interest to developer consumers of the service, but it’s rather more limited. It’s just the inputs that the consuming application needs to pass to the service, the outputs it can expect in return, and the changes to the entity caches that the behavior translates into. The inputs and outputs that the application needs to pass/can expect can be documented just as you would write Javadoc (though obviously you can’t take direct advantage of the Javadoc engine to do so).
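
    For instance (with purely hypothetical names), a data-only service’s contract might be documented like this, with the entity-cache effect called out alongside the usual Javadoc tags; a “full” (task flow) service’s parameters and outcomes can be documented in the same style, just outside the Javadoc engine:

    ```java
    // Hypothetical example of documenting a data-only service's contract
    // Javadoc-style; every name here is an assumption for illustration.
    import java.util.Date;
    import oracle.jbo.JboException;
    import oracle.jbo.domain.Number;

    public interface CustomerService {

        /**
         * Deactivates the given customer.
         * <p>
         * Entity-cache effect: sets Customers.Status to "INACTIVE" in the
         * shared entity cache; the consuming application's root application
         * module is responsible for the commit.
         *
         * @param customerId key of an existing customer row
         * @return the date on which the deactivation takes effect
         * @throws JboException if the customer has open orders
         */
        Date deactivateCustomer(Number customerId) throws JboException;
    }
    ```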

    JAR files…I agree that versioning is the current weakest point of the methodology. But I don’t think it’s a deal-killer. For one thing, I think that, if you think of the deployed libraries as internal *releases* (in a very fast-cycle, Agile sense), and really use the centralized network instance of the JAR (which will always be the latest release, definitionally) as much as possible, then this sort of versioning won’t be a huge issue.

    But I’m still not 100% sure that this is the best way to manage it. Here’s another possibility:

    Each application, or service that relies on other services, has in its build script (and I *strongly* recommend using build scripts, at least for deployment, for a variety of independent reasons–though that’s a topic for a different post) a step that checks the latest version of the needed services out of the source control system into a temporary area, compiles and JARs them, and copies them into the WEB-INF/lib area of the app, so that the built project will always have the latest version of all relevant libraries.

    Since the deploy script will call the build script first, this version will also get into the deployed application. (This, obviously, is *not* treating the libraries as having fast production cycles, but rather requires a synch between libraries and apps at each phase: development/testing/production, which is less than perfect; that’s why I prefer the original methodology for this).

    One impact of any of these systems is that, to take advantage of a new release of a service, the applications that use the service will need to be redeployed. I’m looking into the possibility that BC and task flow libraries don’t actually need to be deployed with the service but can just be added to the J2EE container’s shared classpath (I don’t yet know whether this is true); in that case, a simultaneous deployment (via build script) to both the container shared classpath and the shared network location will ensure that the latest release version is being used at both design time and run time (requiring only a JDev or container bounce, respectively, to take effect).

    Wow. That was a long reply. It was a very good pair of questions.
