CanWeAllAgreeThatWeWillAimForClarityAsWeNameClasses?

I like to be productive and efficient. So anything that unnecessarily wastes even just a few moments of my time… well, it really bugs me.

That is why I am particularly bugged by a particular practice (nay, an anti-practice) that I see too often. FYI, AFAIK most folks would agree with me, so IDK why this anti-practice is so pervasive.

I’m talking about CamelCased class names with capitalized acronyms.

[Image: a camel. See if you can tell how this animal inspired the term “CamelCase”.]

You see these fairly often in Java. EOFException. URLEncoder. ISBNValidator. Mercifully, the good folks who designed the java.net package decided to use Http rather than HTTP in class names, so we’re not cursed with the likes of HTTPURLConnection. Not so with Apple, however, as anyone who’s worked with Objective-C, or even Swift, can attest. The two-to-three-uppercase-letter classname prefix standard (for example, NSThis, NSThat, NSTheOther) is bad enough. But when you couple that with Apple’s overzealous penchant for capitalizing acronyms, you wind up reading AFJSONRequestOperations from your NSHTTPURLResponses.

[Image: the weird-ass camel that some language designers have apparently once seen.]

SMH.

Though they’re rare, I’ve encountered programmers who vociferously favor the all-caps approach. The rationale is usually that it is, simply, proper grammar. And that’s fair enough. But guess what? It’s also proper grammar to put spaces between words, and to lowercase the first letter of non-proper nouns (unless your nouns are German). Yet we programmers happily break those grammatical decrees on a daily basis.

Instead of being GrammarSticklers, we should be crafting class names to convey the classes’ meanings as clearly and quickly as possible. It’s not that these class names are illegible. It’s simply that they are less legible than they should be. It takes an extra beat or two to understand them. And I don’t know about you, but I don’t like wasting beats.

Am I overreacting? IDK, maybe I am. But thanks for bearing with me anyway. I won’t waste any more of your time.

Collections and Encapsulation in Java
Never, ever return null when you're supposed to be returning a Collection.

A core tenet of object oriented programming is encapsulation: callers should not have access to the inner workings of a class. This is something that newer languages such as Kotlin, Swift, and Ceylon have solved well with first-class properties.

Java—having been around for quite a while—does not have the concept of first-class properties. Instead, the JavaBeans spec was introduced as Java’s method of enforcing encapsulation. Writing JavaBeans means that you need to make your class’s fields private, exposing them only via getter and setter methods.

If you’re like me, you’ve often felt when writing JavaBeans that you were writing a bunch of theoretical boilerplate that rarely served any practical purpose. Most of my JavaBeans have consisted of private fields, and their corresponding getters and setters that do nothing more than, well, get and set those private fields. More than once I’ve been tempted to simply make the fields public and dispense with the getter/setter fanfare, at least until a stern warning from the IDE sent me back, tail between my legs, to the JavaBeans standard.

Recently, though, I’ve realized that encapsulation and the JavaBean/getter/setter pattern are quite useful in a common scenario: Collection-type fields. How so? Let’s fabricate a simple class:

public class MyClass {

    private List<String> myStrings;

}

We have a field—a List of Strings—called myStrings, which is encapsulated in MyClass. Now, we need to provide accessor methods:

public class MyClass {

    private List<String> myStrings;

    public void setMyStrings(List<String> s) {
        this.myStrings = s;
    }

    public List<String> getMyStrings() {
        return this.myStrings;
    }

}

Here we have a properly encapsulated, albeit verbose, class. So we’ve done good, right? Hold that thought.

Optional lessons

Consider the Optional class, introduced in Java 8. If you’ve done much work with Optionals, you’ve probably heard the mantra that you should never return null from a method that returns an Optional. Why? Consider the following contrived example:

public class Foo {

    private String bar;

    public Optional<String> getBar() {
        return (bar == null) ? null : Optional.of(bar);
    }

}

Now clients can use the method thusly:

foo.getBar().ifPresent(log::info);

and risk throwing a NullPointerException. Alternatively, they could perform a null check:

if (foo.getBar() != null) {
    foo.getBar().ifPresent(log::info);
}

Of course, doing that defeats the very purpose of Optionals. In fact, it so defeats the purpose of Optionals that it’s become standard practice that any API that returns Optional will never return a null value.
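
The conventional fix is to have the getter itself guarantee a non-null Optional. A minimal rewrite of the contrived Foo class above might look like this:

import java.util.Optional;

public class Foo {

    private String bar;

    public Optional<String> getBar() {
        // Optional.ofNullable() returns Optional.empty() when bar is null,
        // so callers can always chain ifPresent(), map(), etc. safely
        return Optional.ofNullable(bar);
    }

}

With that in place, foo.getBar().ifPresent(log::info) is always safe to call.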

Back to Collections. Much like an Optional contains either none or one, a Collection contains either none or some. And much like Optionals, there should be no reason to return null Collections (except perhaps in rare, specialized cases, none of which come to mind). Simply return an empty (zero-sized) Collection to indicate the lack of any elements.

[Image: a sock drawer. The drawer still exists even when all the socks are removed, right?]

It’s for this reason that it’s becoming more common to ensure that methods that return Collection types (including arrays) never return null values, the same as methods that return Optional types. Perhaps you or your organization have already adopted this rule in writing new code. If not, you should. After all, would you (or your clients) rather do this?:

boolean isUnique = personDao.getPersonsByName(name).size() == 1;

Or have your code littered with the likes of this?:

List<Person> persons = personDao.getPersonsByName(name);
boolean isUnique = (persons == null) ? false : persons.size() == 1;

So how does this relate to encapsulation?

Keeping Control of our Collections

Back to our MyClass class. As it is, an instance of MyClass could easily return null from the getMyStrings() method; in fact, a fresh instance would do just that. So, to adhere to our new never-return-a-null-Collection guideline, we need to fix that:

public class MyClass {

    private List<String> myStrings = new ArrayList<>();

    public void setMyStrings(List<String> s) {
        this.myStrings = s;
    }

    public List<String> getMyStrings() {
        return this.myStrings;
    }

}

Problem solved? Not exactly. Any client could call aMyClass.setMyStrings(null), in which case we’re back to square one.

At this point, encapsulation sounds like a practical—rather than solely theoretical—concept. Let’s expand the setMyStrings() method:

public void setMyStrings(List<String> s) {
    if (s == null) {
        this.myStrings.clear();
    } else {
        this.myStrings = s;
    }
}

Now, even when null is passed to the setter, myStrings will retain a valid reference (in the example here, we take null to mean that the elements should be cleared out). And of course, a caller that assigns null to the reference returned by aMyClass.getMyStrings() will have no effect on aMyClass’ underlying myStrings field. So are we all done?

Er, well, sort of. We could stop here. But really, there’s more we should do.

Consider that we are replacing our private ArrayList with the List passed to us by the caller. This has two problems: first, we no longer know the exact List implementation used by myStrings. In theory, this shouldn’t be a problem, right? Well, consider this:

myClass.setMyStrings(Collections.unmodifiableList(Arrays.asList("Heh", "gotcha!")));

So if we ever update MyClass such that it attempts to modify the contents of myStrings, bad things can start happening at runtime.
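
To make “bad things” concrete, here’s a hypothetical sketch (the addString() method referenced in the comment doesn’t exist on MyClass yet; we’ll add one like it later in this post):

// Caller hands us a list that cannot be modified:
myClass.setMyStrings(Collections.unmodifiableList(Arrays.asList("Heh", "gotcha!")));

// Suppose MyClass later grows a mutating method such as:
//     public void addString(String s) { this.myStrings.add(s); }
// This call would then fail at runtime:
myClass.addString("oops");  // throws UnsupportedOperationException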

The second problem is that the caller retains a reference to our underlying List, and can therefore directly manipulate it from outside of MyClass.
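
A hypothetical sketch of that second problem:

List<String> callerList = new ArrayList<>(Arrays.asList("a", "b"));
myClass.setMyStrings(callerList);

// The caller still holds the very same List instance, and can change
// MyClass's internal state without ever touching MyClass again:
callerList.clear();
callerList.add("surprise!");

System.out.println(myClass.getMyStrings());  // prints [surprise!]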

What we should be doing is storing the elements passed to us in the ArrayList to which myStrings was initialized. While we’re at it, let’s really embrace encapsulation. We should be hiding the internals of our class from outside callers. The reality is that callers of our classes shouldn’t care whether there’s an underlying List, or Set, or array, or some runtime dynamic code-generation voodoo, that’s storing the Strings that we pass to it. All they should know is that Strings are being stored somehow. So let’s update the setMyStrings() method thusly:

public void setMyStrings(Collection<String> s) {
    this.myStrings.clear(); 
    if (s != null) { 
        this.myStrings.addAll(s); 
    } 
}

This has the effect of ensuring that myStrings ends up with the same elements contained within the input parameter (or is empty if null is passed), while ensuring that the caller doesn’t have a reference to myStrings.

Now that myStrings’ reference never needs to change, let’s just make it a constant:

public class MyClass {
    private final List<String> myStrings = new ArrayList<>();
    ...
}

While we’re at it, we shouldn’t be returning our underlying List via our getter. That too would leave the caller with a direct reference to myStrings. To remedy this, recall the “defensive copy” mantra that Effective Java beat into our heads (or, at least, should have):

public List<String> getMyStrings() {
    // depending on what, exactly, we want to return
    return new ArrayList<>(this.myStrings);  
}

At this point, we have a well-encapsulated class that eliminates the need for null-checking whenever its getter is called. We have, however, taken some control away from our clients. Since they no longer have direct access to our underlying List, they can no longer, say, add or remove individual Strings.

No problem. We can simply add methods like

public void addString(String s) {
    this.myStrings.add(s);
}

and

public void removeString(String s) { 
    this.myStrings.remove(s); 
}

Might our callers need to add multiple Strings at once to a MyClass instance? That’s fine as well:

public void addStrings(Collection<String> c) {
    if (c != null) {
        this.myStrings.addAll(c);
    }
}

And so on…

public void clearStrings() {
    this.myStrings.clear();
}

public void replaceStrings(Collection<String> c) {
    clearStrings();
    addStrings(c); 
}

Collecting our thoughts

Here, then is what our class might ultimately look like:

public class MyClass {

    private final List<String> myStrings = new ArrayList<>();

    public void setMyStrings(Collection<String> s) {
        this.myStrings.clear(); 
        if (s != null) { 
            this.myStrings.addAll(s); 
        } 
    }

    public List<String> getMyStrings() {
        return new ArrayList<>(this.myStrings);
    }

    public void addString(String s) { 
        this.myStrings.add(s); 
    }

    public void removeString(String s) { 
        this.myStrings.remove(s); 
    }

    // And maybe a few more helpful methods...

}

With this, we’ve achieved a class that:

  • is still basically a POJO that conforms to the JavaBean spec
  • fully encapsulates its private member(s)

and most importantly ensures that its method that returns a Collection always does just that–returns a Collection–and never returns null.
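
To round things out, here’s roughly what calling code looks like once these guarantees are in place; note the absence of null checks:

MyClass myClass = new MyClass();

// No strings have been added, but the getter still returns an (empty) List,
// so we can chain calls without a null check:
boolean empty = myClass.getMyStrings().isEmpty();  // true

myClass.addString("hello");
myClass.getMyStrings().forEach(System.out::println);  // prints "hello"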

When to use Abstract Classes
Abstract classes are overused and misused. But they have a few valid uses.

Abstract classes are a core feature of many object-oriented languages, such as Java. Perhaps for that reason, they tend to be overused and misused. Indeed, discussions abound about the overuse of inheritance in OO languages, and inheritance is core to using abstract classes.
In this article we’ll use some examples of patterns and anti-patterns to illustrate when to use abstract classes, and when not to.
While this article presents the topic from a Java perspective, it is also relevant to most other object-oriented languages, even those without the concept of abstract classes. To that end, let’s quickly define abstract classes. If you already know what abstract classes are, feel free to skip the following section.

Defining Abstract Classes

Technically speaking, an abstract class is a class which cannot be directly instantiated. Instead, it is designed to be extended by concrete classes which can be instantiated. Abstract classes can—and typically do—define one or more abstract methods, which themselves do not contain a body. Instead, concrete subclasses are required to implement the abstract methods.
Let’s fabricate a quick example:
public abstract class Base {

    public void doSomething() {
        System.out.println("Doing something...")
    }

    public abstract void doSomethingElse();

}
Note that doSomething()–a non-abstract method–has a body, while doSomethingElse()–an abstract method–does not. You cannot directly instantiate an instance of Base. Try this, and your compiler will complain:
Base b = new Base();
Instead, you need to subclass Base like so:
public class Sub extends Base {

    @Override
    public void doSomethingElse() {
        System.out.println("Doin' something else!");
    }

}
Note the required implementation of the doSomethingElse() method.
Not all OO languages have the concept of abstract classes. Of course, even in languages without such support, it’s possible to simply define a class whose purpose is to be subclassed, and define either empty methods, or methods that throw exceptions, as the “abstract” methods that subclasses override.
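For illustration (using Java syntax, even though Java does have the abstract keyword), that workaround might look something like this:
public class Base {

    public void doSomething() {
        System.out.println("Doing something...");
    }

    // A stand-in for an abstract method: it compiles and can be called,
    // but is useless until a subclass overrides it
    public void doSomethingElse() {
        throw new UnsupportedOperationException("Subclasses must override doSomethingElse()");
    }

}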

The Swiss Army Controller

Let’s examine a common abuse of abstract classes that I’ve come across frequently. I’ve been guilty of perpetuating it; you probably have, too. While this anti-pattern can appear nearly anywhere in a code base, I tend to see it quite often in Model-View-Controller (MVC) frameworks at the controller layer. For that reason, I’ve come to call it the Swiss Army Controller.
The anti-pattern is simple: A number of subclasses, related only by where they sit in the technology stack, extend from a common abstract base class. This abstract base class contains any number of shared “utility” methods. The subclasses call the utility methods from their own methods.
Swiss army controllers generally come into existence like this:
  1. Developers start building a web application, using an MVC framework such as Jersey.
  2. Since they are using an MVC framework, they back their first user-oriented webpage with an endpoint method inside a UserController class.
  3. The developers create a second webpage, and therefore add a new endpoint to the controller. One developer notices that both endpoints perform the same bit of logic—say, constructing a URL given a set of parameters—and so moves that logic into a separate constructUrl() method within UserController.
  4. The team begins work on product-oriented pages. The developers create a second controller, ProductController, so as to not cram all of the methods into a single class.
  5. The developers recognize that the new controller might also need to use the constructUrl() method. At the same time, they realize hey! those two classes are controllers! and therefore must be naturally related. So they create an abstract BaseController class, move the constructUrl() into it, and add extends BaseController to the class definition of UserController and ProductController.
  6. This process repeats until BaseController has ten subclasses and 75 shared methods (sketched below).
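Here’s roughly what that end state looks like; the class and method names are invented for illustration:
// The abstract grab bag that every controller extends
public abstract class BaseController {

    protected String constructUrl(String key, String value) {
        return "https://example.com/app?" + key + "=" + value;
    }

    // ...74 other loosely related "utility" methods...
}

public class UserController extends BaseController {

    public String userPage(String userId) {
        // endpoint logic that happens to need a URL
        return constructUrl("user", userId);
    }
}

public class ProductController extends BaseController {

    public String productPage(String productId) {
        return constructUrl("product", productId);
    }
}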

Now there are a ton of useful methods for the concrete controllers to use, simply by calling them directly. So what’s the problem?
The first problem is one of design. All of those different controllers are in fact unrelated to each other. They may live at the same layer of our stack, and may perform a similar technical role, but as far as our application is concerned, they serve different purposes. Yet we’ve now locked them to a fairly arbitrary object hierarchy.
The second is more practical. You’ll realize it the first time you need to use one of the 75 shared methods from somewhere other than a controller, and you find yourself instantiating a controller class to do so.
String url = new UserController().constructUrl(key, value);
You’ll have created a trove of useful methods which now require a controller instance to access. Your first thought might be something like, hey, I can just make the method static in the controller, and use it like so:
String url = UserController.constructUrl(key, value);
That’s not much better, and actually, a little worse. Even if you’re not instantiating the controller, you’ve still tied the controller to your other classes. What if you need to use the method in your DAO layer? Your DAO layer should know nothing about your controllers. Worse, in introducing a bunch of static methods, you’ve made testing and mocking much more difficult.
It’s important to emphasize the interaction flow here. In this example, a call is made directly to one of the concrete subclasses’ methods. Then, at some point, this method calls into one or more of the utility methods in the abstract base class.
In fact, in this example there was never a need for an abstract base controller class. Each shared method should have been moved either to an appropriate service-layer class (if it handles business logic) or to a utility class (if it provides general, supplementary functionality). Of course, as mentioned above, the utility classes should still be instantiable, and not simply filled with static methods.
Now there is a set of utility methods that is truly reusable by any class that might need them. Furthermore, we can break those methods into related groups. Picture, for example, a class called UrlUtility which contains only methods related to creating and parsing URLs. We might also create a class with methods related to string manipulation, another with methods related to our application’s current authenticated user, etc.
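A minimal sketch of such a UrlUtility (again, the names are invented for illustration):
public class UrlUtility {

    public String constructUrl(String key, String value) {
        return "https://example.com/app?" + key + "=" + value;
    }

    // other URL-related helpers (parsing, encoding, etc.) live here, and nowhere else

}
Any class that needs it, at any layer, can hold (or be injected with) its own UrlUtility instance; no controller is involved.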
Note also that this approach fits nicely with the composition over inheritance principle.
Inheritance and abstract classes are powerful constructs. As such, examples of their misuse abound, the Swiss Army Controller being a common one. In fact, I’ve found that most typical uses of abstract classes can be considered anti-patterns, and that there are few good uses of abstract classes.

The Template Method

With that said, let’s look at one of the best uses, described by the template method design pattern. I’ve found the template method pattern to be one of the lesser known–but more useful–design patterns out there.
You can read about how the pattern works in numerous places. It was originally described in the Gang of Four Design Patterns book; many descriptions can now be found online. Let’s see how it relates to abstract classes, and how it can be applied in the real world.
For consistency, I’ll describe another scenario that uses MVC controllers. In our example, we have an application for which there exist a few different types of users (for now, we’ll define two: employee and admin). When creating a new user of either type, there are minor differences depending on which type of user we are creating. For example, assigning roles needs to be handled differently. Other than that, the process is the same. Furthermore, while we don’t expect an explosion of new user types, we will from time to time be asked to support a new type of user.
In this case, we would want to start with an abstract base class for our controllers. Since the overall process of creating a new user is the same regardless of user type, we can define that process once in our base class. Any details that differ will be relegated to abstract methods that the concrete subclasses will implement:
public abstract class BaseUserController {

    // ... variables, other methods, etc

    @POST
    @Path("/user")
    public UserDto createUser(UserInfo userInfo) {
        UserDto u = userMapper.map(userInfo);
        u.setCreatedDate(Instant.now());
        u.setValidationCode(validationUtil.generatedCode());
        setRoles(u);  // to be implemented in our subclasses
        userDao.save(u);
        mailerUtil.sendInitialEmail(u);
        return u;
    }

    protected abstract void setRoles(UserDto u);

}
Then we need simply to extend BaseUserController once for each user type:
@Path("employee")
public class EmployeeUserController extends BaseUserController {

    protected void setRoles(UserDto u) {
        u.addRole(Role.employee);
    }

}
@Path("admin")
public class AdminUserController extends BaseUserController {

    protected void setRoles(UserDto u) {
        u.addRole(Role.admin);
        if (u.hasSuperUserAccess()) {
            u.addRole(Role.superUser);
        } 
    }

}
Any time we need to support a new user type, we simply create a new subclass of BaseUserController and implement the setRoles() method appropriately.
Let’s contrast the interaction here with the interaction we saw with the swiss army controller.
Using the template method approach, we see that the caller (in this case, the MVC framework itself, responding to a web request) invokes a method in the abstract base class, rather than the concrete subclass. This is underscored by the fact that setRoles(), the method implemented in the subclasses, is protected. In other words, the bulk of the work is defined once, in the abstract base class. Only the parts of that work that need to be specialized are left to the concrete implementations.

A Rule of Thumb

I like to boil software engineering patterns to simple rules of thumb. While every rule has its exceptions, I find that it’s helpful to be able to quickly gauge whether I’m moving in the right direction with a particular design.
It turns out that there’s a good rule of thumb when considering the use of an abstract class. Ask yourself, Will callers of your classes be invoking methods that are implemented in your abstract base class, or methods implemented in your concrete subclasses?
  • If it’s the former–you are intending to expose only methods implemented in your abstract class–odds are that you’ve created a good, maintainable set of classes.
  • If it’s the latter–callers will invoke methods implemented in your subclasses, which in turn will call methods in the abstract class–there’s a good chance that an unmaintainable anti-pattern is forming.

Collection functions

A common use of functional-style programming is applying transformative functions to collections. For example, I might have a List of items, and I want to transform each item a certain way. There are a number of such functions that are commonly found, in one form or another, in languages that allow functional programming. For example, Seq in Scala provides a map() function, while a method of the same name can be found in Java’s Stream class.

Keeping these functions/methods straight can be difficult when starting out, so I thought I’d list out some of the common ones, along with a quick description of their purpose. I’ll use Scala’s implementation primarily, but will try to point out where Java differs. Hopefully the result will be useful for users of other languages as well.

flatten()

Purpose: Takes a list of lists/sequences, and puts each element in each list/sequence into a single list
Result: A single list consisting of all elements
Example:
scala> val letters = List( List("a","e","i","o","u"), List("b","d","f","h","k","l","t"), List("c","m","n","r","s","v","w","x","z"), List("g","j","p","q","y") )
letters: List[List[String]] = List(List(a, e, i, o, u), List(b, d, f, h, k, l, t), List(c, m, n, r, s, v, w, x, z), List(g, j, p, q, y))
scala> letters.flatten
res19: List[String] = List(a, e, i, o, u, b, d, f, h, k, l, t, c, m, n, r, s, v, w, x, z, g, j, p, q, y)
scala> letters.flatten.sorted
res20: List[String] = List(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z)
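
Java note: Java’s Stream API has no standalone flatten(); the closest equivalent is to flatMap each inner list into a stream. A rough translation of the example above (using java.util.Arrays and java.util.stream.Collectors):
List<List<String>> letters = Arrays.asList(
        Arrays.asList("a", "e", "i", "o", "u"),
        Arrays.asList("b", "d", "f", "h", "k", "l", "t"));

List<String> flattened = letters.stream()
        .flatMap(List::stream)          // "flatten" each inner list
        .sorted()
        .collect(Collectors.toList());  // [a, b, d, e, f, h, i, k, l, o, t, u]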

map()

Purpose: Applies a function that transforms each element in a list.
Result: Returns a list (or stream) consisting of the transformed elements. The resulting elements can be of a different type than the original elements.
Example:
scala> val nums = List(1, 2, 3, 4, 5)
scala> val newNums = nums.map(n => n * 2)
newNums: List[Int] = List(2, 4, 6, 8, 10)
scala> val newStrs = nums.map(n => s"number $n")
newStrs: List[String] = List(number 1, number 2, number 3, number 4, number 5)
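
Java note: the Stream equivalent is also called map(). A rough translation of the example above (using java.util.Arrays and java.util.stream.Collectors):
List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);

List<Integer> newNums = nums.stream()
        .map(n -> n * 2)                 // transform each element
        .collect(Collectors.toList());   // [2, 4, 6, 8, 10]

List<String> newStrs = nums.stream()
        .map(n -> "number " + n)         // the result type can differ from the input type
        .collect(Collectors.toList());   // [number 1, number 2, ...]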

Scala note: Often when working with Futures, you’ll see map() applied to a Future. In this case, the map() method–when provided with a mapping function for the value returned by the first Future–returns a new Future that runs once the first Future completes.

flatMap()

Purpose: Takes a sequence/list (A) of smaller sequences/lists (B), applies the provided function to each of those smaller sequences (B), and places the result of each into a single list to be returned. A combination of map and flatten.
Result: Returns a single list (or stream) containing all of the results of applying the provided function to (B).
Example:
scala> val numGroups = List( List(1,2,3), List(11,22,33) )
numGroups: List[List[Int]] = List(List(1, 2, 3), List(11, 22, 33))
scala> numGroups.flatMap( n => n.filter(_ % 2 == 0) )
res8: List[Int] = List(2, 22)
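
Java note: Stream also has a flatMap(), though the function you pass must return a Stream rather than a collection. Something like:
List<List<Integer>> numGroups = Arrays.asList(
        Arrays.asList(1, 2, 3),
        Arrays.asList(11, 22, 33));

List<Integer> evens = numGroups.stream()
        .flatMap(group -> group.stream().filter(n -> n % 2 == 0))
        .collect(Collectors.toList());   // [2, 22]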

fold()

Purpose: Starts with a single T value (call it v), then takes a List of T and combines each element with v using the provided function, producing a new value of v on each iteration. The order in which elements are combined is not guaranteed. (Note: this is very similar to the reduce() method on Java streams.)
Result: A single T value (which would be the final value of v as described above)
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.fold(0) ((a,b) => if (a % 2 == 0) a else b)
res25: Int = 0
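
Java note: the closest Stream analogue is reduce(), which combines an identity value with each element via the provided function. For example:
List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9);

// start from 0 and fold each element into the running total
int sum = nums.stream().reduce(0, (a, b) -> a + b);  // 45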

foldLeft()

Purpose: Like fold(), but always iterates from left to right.
Result: A single T value
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.foldLeft(-1) ((a,b) => if (a % 2 == 0) a else b)
res27: Int = 2

foldRight()

Purpose: Like fold(), but always iterates from right to left.
Result: A single T value
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.foldRight(-1) ((a,b) => if (b % 2 == 0) b else a)
res28: Int = 8

Business Logic for Scala/Play apps

As I’d mentioned previously, I’m a fairly seasoned Java developer who is making a foray into Scala and its ecosystem (including the Play framework, as well as Akka).

One thing that’s struck me about Play is that there doesn’t seem to be a prescribed pattern for how to handle business logic. This is in contrast to a typical Java EE application (or Spring-based Java web application). With the latter, you’d typically find applications divided into the following:

  • MVC Layer (or Presentation Layer) – This layer would typically contain:
    • UI Models: POJOs designed to transfer values from the controllers to the UI templates
    • Views: files containing markup mixed with UI models, to present data to the user. These would typically be JSPs, or files of some other templating language such as Thymeleaf, Handlebars, Velocity, etc. They might also be JSON or XML responses in the case of single-page applications
    • Controllers: Classes designed to map requests to business logic classes, and to transform the results into UI Models.
  • Business Layer – This layer would typically contain:
    • Managers and/or Facades: somewhat coarse-grained classes that encapsulate business logic for a given domain; for example, UserFacade, AccountFacade, etc. These classes are typically stateless singletons.
    • Potentially Business Models: POJOs that describe entities from the business’ standpoint.
  • Data Access Layer – This layer would typically contain:
    • DAOs (Data Access Objects): Fine-grained, singleton classes that encapsulate the logic needed for CRUD operations on database entities
    • Database Entities: POJOs that map to database tables (or Mongo collections, etc, depending on the data source).
By contrast, Play applications seem to typically have only the MVC Layer. Occasionally I’ll see examples with a utils/ folder, but I’ve yet to see any examples with what I would consider an entirely separate business layer or data access layer.
Clearly I could add Business and Data Access Layers to my Play application if I wanted to. But part of learning a new framework is not just learning the mechanics, but also the spirit of the framework. So here are a few thoughts I’ve had on the subject:

Rely on Models’ Companion Objects

Scala has the concept of companion objects. A companion object is essentially a singleton instance of a class. For example, I might have a model called User, which looks something like this:
case class User(id: Long, firstName: String, lastName: String, email: String)
I would typically create, in the same User.scala file, a companion object like so:
object User {
  def findByEmail(email: String): User = {
    // query the database and return a User
  }
}

As shown above, it’s common for companion objects to contain CRUD operations. So one thought is that we can combine business logic and data access methods in a model’s companion object, treating the companion object as a sort of hybrid manager/DAO.

There are of course a few downsides to this approach. First is that of separation of concerns. If we’re imbuing companion objects with the ability to perform CRUD operations and business logic, then we’ll wind up with large, hard to read companion objects that have multiple responsibilities.

The other downside is that business operations within a companion object would be too fine-grained. Often, business logic spans multiple entities. Trying to choose which entity’s companion object should contain a specific business rule can become cumbersome. For example, say I want to update a phone number. Surely, that functionality belongs in a PhoneNumber object. But… what if my business stipulates that a phone number can only contain an area code that corresponds to its owner’s mailing address? Suddenly, my PhoneNumber object must communicate with a User and MailingAddress object.

Use Middleware for Business Logic

As a Java engineer, I’m used to my business logic being encapsulated within stateless Spring beans. These beans exist in the Spring container (i.e. the application context). They are injected into, for example, controllers, which in turn invoke methods on the beans to cause some business operation to occur.

Play ships with Akka out of the box. So I wonder… would a framework like Akka suffice? Presumably I could create an actor hierarchy for each business operation, thus keeping the operations centralized and well-defined. I’m just delving into Akka, so I’m not sure how viable a solution that would be. My sense is that, at best, I’d be misusing Akka somewhat with this approach. Moreover, I suspect I’m trying to shoehorn a Spring-application paradigm into a Play application.

Let Aggregate Roots Define Business Logic

I’ve coincidentally been reading a lot of Martin Fowler‘s blog posts. One idea of his that seems to be picking up traction is that anemic entities–those that are little more than getters and setters–are an anti-pattern. Couple this with the concept of aggregates and aggregate roots presented in Eric Evans’ Domain Driven Design, and I think I might be onto the best solution.

The basic premise is that, unlike the layered architecture I described above, with Manager/Facade classes, the domain entities themselves should perform their own business logic. Furthermore, entities should be arranged into aggregates. An aggregate is essentially a group of domain entities that logically form one unit. Furthermore, one of these entities should be the root, to which all other entities are attached. Modifications should only be done through that root.

In my example above about Users, PhoneNumbers and MailingAddresses, those entities would be arranged as an aggregate. User would be the root entity; after all, PhoneNumbers and MailingAddresses don’t simply exist on their own, but rather are associated with a person (or organization). To modify a User’s phone number, I would go through the User rather than directly modifying the PhoneNumber. For example, something like this:

var user: User = Users.findById(123)
user.phoneNumber = "415-555-1212"

rather than this:

var pn: PhoneNumber = PhoneNumbers.findById(789)
pn.value = "415-555-1212"

Using the former approach, my User instance can ensure data integrity; e.g. ensuring that the provided area code is valid.

This, then, may be the best option:

  1. Companion objects handle CRUD data access operations
  2. Entities themselves–organized into aggregates–handle their own business rules
  3. No separate business logic stereotype is called out

Scaling Scala – part 1

It’s time to explore Scala. I still enjoy Java programming, and that’s still what I do for a living. But I have to admit that Scala is intriguing. Plus, it’s good to learn new languages every so often. And as they say, Java is more than a language; it’s also a platform and an ecosystem, one that Scala fits very well into.

I’ve gone through different books and tutorials, but the best way to learn a language is to come up with a task and figure out how to do it. I’ve decided that while I’m learning Scala, I’ll also learn the Play! framework. My task will be a microservice whose purpose is to authenticate users. Although my current employer doesn’t use Scala (at least not directly, although we do use Kafka, Akka, and other technologies written in Scala) my plan is to write a service that could be used by the company. That way I won’t be tempted to cut corners.

As I develop this project, I’ll post the solutions to any issues I encounter along the way. I figure there are probably enough Scala noobs out there who might encounter the same problems. I also figure that some of these issues might be very elementary to developers who are more experienced with Play! and Scala. In that vein, any corrections or suggestions are most welcome!

Maven Repositories (and the Oracle driver)

I started a few days ago, and hit two issues rather quickly. The first stems from my company’s use of Oracle as its RDBMS; that’s where we store user credentials. So of course I would need to read from Oracle in order to authenticate users.

As I understand it, Play! makes use of SBT (the Simple Build Tool), which is developed by Typesafe, and is used to manage Play! projects (and other Scala-based frameworks). SBT is analogous to Maven for pure Java projects. In fact, SBT makes use of Ivy for dependency management; Ivy, in turn, makes use of Maven repositories.

So I figured I’d need to pull the Oracle JDBC driver from Maven Central. Play! projects are created with a build.sbt file in the project’s root directory, and that’s where dependencies are listed. We use ojdbc6 (the Oracle JDBC driver for Java 6), so our POM entry looks like this:

<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
</dependency>

To add that to build.sbt, it would look like this:

libraryDependencies += "com.oracle" %% "ojdbc6" % "11.2.0.3"

I added that to build.sbt, and was confronted with errors stating that that dependency couldn’t be resolved. Turns out that due to licensing reasons, Oracle does not allow its JDBC driver in any public repos. So the solution is to simply download the JAR file from Oracle, create a lib/ directory in the Play! project’s root, and add the JAR there.

Admittedly, that was as much an issue with the ojdbc6 driver as with Play! itself, but I thought it worth documenting here.

Artifactory (or Nexus, if you prefer)

Next up was the issue of artifacts that my company produces. Much of our code is encapsulated in common JAR files and, of course, hosted in our own internal (Artifactory) repository. Specifically, the domain class that I would be pulling from the database contains, among other fields, an enum called Type (yes, I know… that name was not my doing!) which is located in a commons module. I could’ve created a Scala Enumeration for my project, or just skipped the field, but I wanted to demonstrate the interoperability between new Scala projects and our existing Java code.

So I’d have to point SBT to the correct location to find the commons module. I found bits and pieces online on how to do it; here’s the solution that I ultimately pieced together:

(1) SBT had already created a ~/.sbt/0.13/ directory for me. In there, I created a plugins/ subdirectory and, within it, a credentials.sbt file with these contents:

credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")

(2) Within the existing ~/.ivy2/ directory, I created a .credentials file with these contents:

realm=[Artifactory Realm]
host=company.domain.com
user=[username]
password=[password]

(3) I added the repository location in ~/.sbt/repositories like so:

my-maven-proxy-releases: https://artifactory.tlcinternal.com/artifactory/LC_CENTRAL

(4) I added the following line in ~/.sbtconfig to tell SBT where to find the credentials:

export SBT_CREDENTIALS="$HOME/.ivy2/.credentials"

I’m not sure why we need both step 1 and 4, but both seem to be required.

Once I restarted my Play! application (this was one case where hot-deployment didn’t seem to work) I was able to use the commons module in my Play! app.

Spring Data, Mongo, and Lazy Mappers

In a previous post, I mentioned two things that every developer should do when using Spring Data to access a MongoDB datastore. Specifically, you should be sure to annotate all of your persistent entities with @Document(collection="<custom-collection-name>") and @TypeAlias("<custom-type>"). This decouples your Mongo document data from your specific Java class names (which Spring Data will otherwise closely couple by default), making things like refactoring possible.

With my particular application, however, I ran into an additional problem. Let me recap. My application is a drawing application of sorts. Drawings are modeled by, well, a Drawing class. A Drawing can contain multiple instances of Page, and within each Page, multiple Shape objects. Shape, in turn, is an abstract class with a number of subclasses (Circle, Star, Rectangle, etc).

For our purposes, let’s focus on the relationship between a Page and its Shapes. Here’s a snippet from the Page class:

@Document(collection="page")
@TypeAlias("myapp.page")
public class Page extends BaseDocument {

    @Id 
    private String id;

    @Indexed
    private String drawingId;

    private List<Shape> shapes = new ArrayList<Shape>();

    // ...

}

First, note that I’ve annotated this class so that I have control over the name of the collection that stores Page documents (in this case, "page"), and so that Spring Data will store an alias to the Page class (in this case, "myapp.page") along with the persisted Page documents, rather than storing the fully-qualified class name.

Also of importance here is that the Page class knows nothing about any specific Shape subclasses. This is important from an OO perspective, of course; I should be able to add any number of Shapes to my app’s ecosystem, and the Page class should continue to work with no modifications.

Now let’s look at my Shape class:

public abstract class Shape extends BaseDocument {

    @Id
    private String id;

    @Indexed
    private String pageId;

    // attributes
    private int x;

    private int y;

    // ...

}

Nothing surprising here. Note that Shape has none of the Spring Data annotations; that’s because no concrete instance of Shape will be persisted along with any Pages. It is abstract, after all. Instead, a Page will contain instances of Shape subclasses. Let’s take a look at one such subclass:

@Document(collection="shape")
@TypeAlias("myapp.shape.star")
public class Star extends Shape {

    private int numPoints;

    private float innerRadius;

    private float outerRadius;

}

The @Document(collection="shape") annotation is currently unused, because per my app design, any Shape subclass instance will always be stored as a nested collection within a Page. But it would certainly be possible to store different shapes directly into a specific collection.

The @TypeAlias annotation, however, is very important. Its purpose is to tell Spring Data how to map the different Shapes that it finds within a Page back into the appropriate class. If a Page containing a nine-point star is persisted, then when it’s read back in, that star had better be mapped back into a Star, not a plain Shape; after all, Shape itself knows nothing about a number of points!

Feeling pretty happy with myself, I tried out my code. Upon trying to read my drawings back in, I began getting errors of this type:

org.springframework.data.mapping.model.MappingInstantiationException: Could not instantiate bean class [com.myapp.documents.Shape]: Is it an abstract class?; nested exception is java.lang.InstantiationException

Indeed, Shape is an abstract class, and so indeed, it cannot be directly instantiated. But why was Spring Data trying to directly instantiate a Shape? I played around, tweaked a few things, but nothing fundamentally changed. I checked StackOverflow and the Spring forums. Nothing. So it was time to dig into the documentation.

As with most typical Spring Data/Mongo apps, mine was configured to use a bean of type org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper to map persistence documents to and from Java classes:

    <bean id="mongoTypeMapper" class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper">
        <constructor-arg name="typeKey" value="_alias"></constructor-arg>
    </bean>

    <bean id="mappingMongoConverter"
          class="org.springframework.data.mongodb.core.convert.MappingMongoConverter">
        <constructor-arg ref="mongoDbFactory" />
        <constructor-arg ref="mappingContext" />
        <property name="typeMapper" ref="mongoTypeMapper"/>
    </bean>

    <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
        <constructor-arg ref="mongoDbFactory" />
        <constructor-arg ref="mappingMongoConverter" />
    </bean>

The docs indicated that DefaultMongoTypeMapper was responsible for reading and writing the type information stored with persistent data. By default, this would be a _class property pointing to com.myapp.documents.Star; with my customizations it became an _alias property pointing to myapp.shape.star. But if DefaultMongoTypeMapper wouldn’t do the trick, perhaps I needed another mapper.

Looking through the documentation, I found org.springframework.data.convert.MappingContextTypeInformationMapper. Here’s what its Javadocs indicated:

TypeInformationMapper implementation that can be either set up using a MappingContext or manually set up Map of String aliases to types. If a MappingContext is used the Map will be build inspecting the PersistentEntity instances for type alias information.

That seemed promising. If I could replace my DefaultMongoTypeMapper with a MappingContextTypeInformationMapper that could scan my persistent entities and build a type-to-alias mapping, then that should solve my problem. The docs also said something about manually creating a Map, but a) It wasn’t readily apparent how to create a Map myself, and b) I didn’t like that approach; I didn’t want to have to manually configure an entry for any new Shape that might be created.

One problem. You’ll notice above that my DefaultMongoTypeMapper is wired into my MappingMongoConverter by way of the latter’s typeMapper property. In fact, typeMapper is itself of type MongoTypeMapper. While DefaultMongoTypeMapper implements MongoTypeMapper, MappingContextTypeInformationMapper does not. Fortunately, DefaultMongoTypeMapper allows you to chain together fallback mappers by way of an internal property, mappers, which itself is a List<? extends TypeInformationMapper>. And as luck would have it, MappingContextTypeInformationMapper implements TypeInformationMapper.

So I would keep my DefaultMongoTypeMapper, and add a MappingContextTypeInformationMapper to its mappers list. I modified my Spring XML config like so:

  <bean id="mongoTypeMapper" class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper">
      <constructor-arg name="typeKey" value="_alias"></constructor-arg>
      <constructor-arg name="mappers">
          <list>
              <ref bean="mappingContextTypeMapper" />
          </list>
      </constructor-arg>
  </bean>

  <bean id="mappingContextTypeMapper" class="org.springframework.data.convert.MappingContextTypeInformationMapper">
      <constructor-arg ref="mappingContext" />
  </bean>

I redeployed and ran my app.

And I ran into the same exact error. Damn.

At this point, I became concerned that maybe all of the TypeAlias information was completely ignored by SpringData with nested documents, such as my Shapes nested within Pages. So I decided to roll up my sleeves, fire up my debugger, and start getting intimate with the Spring Data source code.

After a bit of debugging, I learned that Spring Data was indeed attempting to determine if any TypeAlias information applied to the Shapes that were being read in for any Page. But in a lazy, half-hearted way.

When I say lazy, I mean that there was absolutely no scanning of entities to search for @TypeAlias annotation like I’d assumed there would be. Everything was done at runtime, as new data types were discovered. The MappingMongoConverter would read my base entity; i.e. a Page document. It would then discover that this document had a collection of things called shapes. Then it would examine the Page class to find the shapes property, and discover that shapes was of type List<Shape>. And finally it would examine the Shape class to determine if it had any TypeAlias data that it could cache for later.

In other words, it was completely backwards from what I needed. This mapper wouldn’t work for me, either.

By this time, I’d developed enough understanding as to what was going on, that creating my own mapper didn’t seem too tough. And that’s what I did. Really, all I needed was a mapper that I could configure to scan one or more packages to discover persistent entities with TypeAlias information, and cache that information for later use.

My class was called EntityScanningTypeInformationMapper, and its full source code is at the end of this post. But the relevant parts are:

  • Its constructor takes a List<String> of packages to scan.
  • It has an init() method that scans the provided packages.
  • Scanning a package entails using reflection to read in the information for each class in the package, determining if it is annotated with @TypeAlias, and if so, mapping the alias to the class.

My Spring XML config was modified thusly:

  <bean id="mongoTypeMapper" class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper">
      <constructor-arg name="typeKey" value="_alias"></constructor-arg>
      <constructor-arg name="mappers">
          <list>
              <ref bean="entityScanningTypeMapper" />
          </list>
      </constructor-arg>
  </bean>

  <bean id="entityScanningTypeMapper" class="com.myapp.utils.EntityScanningTypeInformationMapper" init-method="init">
      <constructor-arg name="scanPackages">
          <list>
              <value>com.myapp.documents.shapes</value>
          </list>
      </constructor-arg>
  </bean>

I redeployed and retested, and it ran like a champ.

So my lesson is that Spring Data, out of the box, doesn’t seem to work well with polymorphism, which is a shame given the schema-less nature of NoSQL data stores like MongoDB. But it doesn’t take too much effort to write your own mapper to compensate.

Oh, and here’s the EntityScanningTypeInformationMapper source:

public class EntityScanningTypeInformationMapper implements TypeInformationMapper {
    private Logger log = Logger.getLogger(this.getClass());
    private final List<String> scanPackages;
    private Map<String, Class<? extends Object>> aliasToClass;
    public EntityScanningTypeInformationMapper(List<String> scanPackages) {
        this.scanPackages = scanPackages;
    }
    public void init() {
       this.scan(scanPackages);
    }
    private void scan(List<String> scanPackages) {
        aliasToClass = new HashMap<>();
        for (String pkg : scanPackages) {
            try {
                findMyTypes(pkg);
            } catch (ClassNotFoundException | IOException e) {
                log.error("Error scanning package " + pkg, e);
            }
        }
    }
    private void findMyTypes(String basePackage) throws ClassNotFoundException, IOException {
        ResourcePatternResolver resourcePatternResolver = new PathMatchingResourcePatternResolver();
        MetadataReaderFactory metadataReaderFactory = new CachingMetadataReaderFactory(resourcePatternResolver);
        String packageSearchPath = ResourcePatternResolver.CLASSPATH_ALL_URL_PREFIX +
                                   resolveBasePackage(basePackage) + "/" + "**/*.class";
        Resource[] resources = resourcePatternResolver.getResources(packageSearchPath);
        for (Resource resource : resources) {
            if (resource.isReadable()) {
                MetadataReader metadataReader = metadataReaderFactory.getMetadataReader(resource);
                Class<? extends Object> c = Class.forName(metadataReader.getClassMetadata().getClassName());
                log.debug("Scanning package " + basePackage + " and found class " + c);
                if (c.isAnnotationPresent(TypeAlias.class)) {
                    TypeAlias typeAliasAnnot = c.getAnnotation(TypeAlias.class);
                    String alias = typeAliasAnnot.value();
                    log.debug("And it has a TypeAlias " + alias);
                    aliasToClass.put(alias, c);
                }
            }
        }
    }
    private String resolveBasePackage(String basePackage) {
        return ClassUtils.convertClassNameToResourcePath(SystemPropertyUtils.resolvePlaceholders(basePackage));
    }
    @Override
    public TypeInformation<?> resolveTypeFrom(Object alias) {
        if (aliasToClass == null) {
            scan(scanPackages);
        }
        if (alias instanceof String) {
            Class<? extends Object> clazz = aliasToClass.get( (String)alias );
            if (clazz != null) {
                return ClassTypeInformation.from(clazz);
            }
        }
        return null;
    }
    @Override
    public Object createAliasFor(TypeInformation<?> type) {
        log.debug("EntityScanningTypeInformationMapper asked to create alias for type: " + type);
        return null;
    }

 

}

Easily Manage Properties in your Java / Spring application

Question to the Java developers out there: properties are easy, right? All you need to do is to type up some key/value pairs in a simple file, read that file in when your application starts up (Java makes that a no-brainer), and refer to the appropriate property in your application code. If you’re using Spring, it’s even easier; just configure your beans in your app context XML file.

Well, that’s fine for a simple application that you’re deploying to your local test server. But for a production-ready app, things are a little more complicated. There are typically two things you’ll need to be able to deal with:

  1. You’ll likely have different servers–test, staging, production, etc–each with their own properties. For example, your test server will likely have different database credentials and connection URLs than your production server.
  2. You won’t want to store all of your properties in plaintext.
What we need is a way to separate the properties that differ across environments. Furthermore, we want to be able to encrypt some of these properties in case our application falls into the wrong hands.

Separate properties by environment

Let’s look at how we’d typically set properties in a Spring web application. For our purposes here, we’ll focus on setting our datasource’s properties: username, password, url, and the JDBC driver class name.
These would typically be set in your spring config file like so:
<bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver" />
    <property name="url" value="jdbc:mysql://127.0.0.1:3366/my_db" />
    <property name="username" value="joe" />
    <property name="password" value="foobarbaz" />
</bean>
Of course, any of these properties might change as you deploy to a different environment. Your production database might be running on its own server, for example. And you certainly wouldn’t want to use the same credentials in production as you do on your test server… right?
So our first step is to externalize these properties into a separate file. Let’s create a file called db.properties. This file should go somewhere in our classpath; for example, in com/mycompany/config in your compile target directory path. This file will contain our data source values, like so:
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3366/my_db
db.username=joe
db.password=foobarbaz
Now, how can we use this properties file? For this, Spring provides a handy class, org.springframework.beans.factory.config.PropertyPlaceholderConfigurer. This class takes a list of property file locations. It will read all of the files in the list, and store their properties to use as placeholder replacements.
To use a PropertyPlaceholderConfigurer, simply declare it as a bean:
<bean id="propertyPlaceholderConfigurer"
      class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:com/mycompany/config/db.properties</value>
        </list>
    </property>
</bean>
Now we can replace our hard-coded properties in our spring config file with these placeholders:
<bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="${db.driver}" />
    <property name="url" value="${db.url}" />
    <property name="username" value="${db.username}" />
    <property name="password" value="${db.password}" />
</bean>
At runtime, Spring will replace the ${…} tokens with their property values.
That works great, but what we really want is separate config files, one per environment. To do this, we copy our db.properties file to db-dev.properties and db-prod.properties (this assumes that we have two environments, development and production; of course, make as many copies as you have distinct environments.) Those files should of course continue to reside on our classpath. And of course, the values of each of the four properties should be changed to match the data source settings for the specific environment.
At runtime, we’ll want Spring to read the current environment’s db properties file. Fortunately, when it performs placeholder replacements, Spring will look for values not just read in by the PropertyPlaceholderConfigurer, but also system properties and VM arguments as well. So what we can do is to pass a VM argument on our startup script. This value will define the current environment. So in our dev environment we’d add this to our startup script:
    -Dcom.mycompany.env=dev
while in production we’d add this:
    -Dcom.mycompany.env=prod
Last, we adjust our PropertyPlaceholderConfigurer location to tell it which configuration file to read in.
<bean id="propertyPlaceholderConfigurer"
      class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:com/mycompany/config/db-${com.mycompany.env}.properties</value>
        </list>
    </property>
</bean>
The ${com.mycompany.env} token above will be replaced with the value of com.mycompany.env set in our startup script. As a result, our PropertyPlaceholderConfigurer will read in the environment-appropriate file, and our dataSource bean will be populated with the environment-appropriate values.

Encrypt your credentials

Our second goal is to keep from bundling our database credentials in plaintext when we package and distribute our web application. To do this, we are going to override Spring’s PropertyPlaceholderConfigurer with our own implementation. This implementation will do one additional thing: when it encounters a property whose name ends with “-enc”, it will assume that the value is encrypted, and therefore decrypt the value just after reading it in.
First, let’s assume that we have written a class called DataEncrypter, containing two methods:
public String encrypt(String rawValue);

public String decrypt(String encryptedValue);

Their functions should be obvious; encrypt() takes a plaintext String and converts it to an encrypted String, while decrypt() reverses the process. Both methods should of course rely on the same key(s). This tutorial will skip what an actual implementation of each method might look like; that’s for the security experts to provide. For ideas, look into the javax.crypto package. Instead, we’ll assume that a good implementation has been written.
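For something concrete to picture, here is a minimal, illustrative sketch built on javax.crypto with plain AES; the constructor-supplied key, the Base64 encoding, and the EncryptionException type are assumptions of mine, and none of this should be mistaken for hardened, production-grade crypto:
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Assumed unchecked exception type; the convertProperties() code later in this
// post catches an EncryptionException when decryption fails
class EncryptionException extends RuntimeException {
    public EncryptionException(Throwable cause) {
        super(cause);
    }
}

public class DataEncrypter {

    private final SecretKeySpec key;

    public DataEncrypter(byte[] keyBytes) {
        // assumes a 16-byte (128-bit) AES key supplied by the caller
        this.key = new SecretKeySpec(keyBytes, "AES");
    }

    public String encrypt(String rawValue) {
        try {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] encrypted = cipher.doFinal(rawValue.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(encrypted);
        } catch (Exception e) {
            throw new EncryptionException(e);
        }
    }

    public String decrypt(String encryptedValue) {
        try {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(encryptedValue));
            return new String(decrypted, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new EncryptionException(e);
        }
    }
}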
The first thing we’ll want to do is encrypt our database credentials with our DataEncrypter instance, and put those encrypted values into our properties files. The most straightforward way to do this is to simply create a Java class with a main(String[] args) method, which uses our DataEncrypter to encrypt a value passed as args[0]:
public static void main(String[] args) {
    DataEncrypter encrypter = …; // obtain an instance
    System.out.println( encrypter.encrypt(args[0]) );
}
Run that class once per property that you want to encrypt; alternatively, you can modify it to expect multiple properties in args, and encrypt them all in one run. Next, we’ll swap those values into the properties file in place of their plaintext versions. We’ll also rename each property with an "-enc" suffix.
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3366/my_db
db.username-enc=f7BjAyDkWSs=
db.password-enc=4Q7xTCr5hZC9Ms6iTSjG3Q==
So how will those values be decrypted? We’ll need to create our own subclass of PropertyPlaceholderConfigurer (which we’ll call DecryptingPropertyPlaceholderConfigurer). PropertyPlaceholderConfigurer contains one important method that we’ll want to override:
protected void convertProperties(Properties props)
This method reads in the properties from any file found in its list of file locations. We’ll want to look for any properties whose names end with "-enc", and invoke our DataEncrypter’s decrypt() method:
public class DecryptingPropertyPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    private static final String SUFFIX_ENC = "-enc";

    private final DataEncrypter encrypter = …; // obtain an instance
    private final Logger log = LoggerFactory.getLogger(getClass());

    @Override
    protected void convertProperties(Properties props) {
        // props refers to the contents of a single properties file
        Enumeration<?> propertyNames = props.propertyNames();
        while (propertyNames.hasMoreElements()) {
            String propertyName = (String) propertyNames.nextElement();
            String propertyValue = props.getProperty(propertyName);
            // look for names such as password-enc
            if (propertyName.endsWith(SUFFIX_ENC)) {
                try {
                    String decValue = encrypter.decrypt(propertyValue);
                    propertyValue = decValue;
                    // normalize the property name by
                    // removing the "-enc" suffix
                    propertyName = propertyName.substring(
                            0, propertyName.length() - SUFFIX_ENC.length());
                } catch (EncryptionException e) {
                    log.error(String.format(
                            "Unable to decode %s = %s", propertyName, propertyValue), e);
                    throw new RuntimeException();
                }
            }
            props.setProperty(propertyName, propertyValue);
        }
    }
}
Note that we strip the "-enc" suffix from each encoded property that is encountered. That way, we can continue to refer to our data source password, for example, as db.password rather than db.password-enc. Note also that if we encounter an error decrypting a property, we log the issue and throw a RuntimeException. This is generally the correct thing to do; we don’t want our application to run with partially-incorrect properties. One thing to note here is that you might want to remove the logging of the propertyValue. A common encryption problem is one where someone forgot to encrypt the value before adding it to the properties file. In such a case, you probably would not want the plaintext value hanging out in your log file.
The last thing to do is simply plug our subclass into our Spring config file:
<bean id="propertyPlaceholderConfigurer"
      class="com.mycompany.DecryptingPropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:com/mycompany/config/db-${com.mycompany.env}.properties</value>
        </list>
    </property>
</bean>
Now, an important caveat is that this is not an end-all, be-all solution to securing your credentials. This approach is only as strong as your encryption algorithm, and it will never ensure 100% security. It’s not a substitute for simply ensuring that your source code and your webapp bundle remain only in your hands (or for taking other measures such as restricting access to your database server, not reusing passwords, etc). But should someone else get ahold of your credentials, this approach should at least buy you enough time to change your credentials to prevent unauthorized access.

Done!

Now that you’ve set up your infrastructure, it should be easy to see how to set additional properties, whether in the same files we’d created above or in new files that you might create. Managing properties is once again very easy, but now it’s also very flexible and a tad more secure than it was before.

Changing the cell type of an NSTableColumn

This one keeps biting me, although the solution is really simple. I’ve added NSTableViews to NIBs before, and by default, the table cells are all of type NSTextFieldCell. Well, in some cases I want a column to be represented by a different type of cell. Fortunately, Cocoa provides a number of different types of cells, such as NSButtonCell, NSPopUpButtonCell, NSComboBoxCell, NSImageCell, NSSliderCell, etc. Unfortunately, how to change a column’s cell type from the default NSTextFieldCell to one of the others isn’t so apparent.

I’ve had to do this a few times, and usually I poke around Interface Builder to see what controls it provides; first while selecting the table column, then the table column cell, then the table itself. Nothing. And then I remember: all you need to do is find the specific cell type in IB’s Object Library, and drag and drop it on top of the column itself (the actual column, not its representation in the Objects list). Voila!