Software engineers don’t need to sell themselves… right? Despite what you may think, companies don’t hire every engineer that they find.

As a software engineer in this market, you can land a job anywhere you please. Right?

I’ve been thinking a lot about hiring engineers lately. As I’m currently director of engineering at a medium-sized organization, I’m always on the lookout for good software engineers to woo and hire. Moreover, our company is planning to ramp up its hiring effort in the near future. It will be daunting, because the market for engineers is hot, and we’ll be facing a lot of competition from other companies that are also hiring. So that basically means that I’ll be hiring any decent engineer that comes my way, right?

Not a chance.

I’ve been on the other side of the equation, as an engineer. And I know the mentality. My skills are valuable… nay, they’re hot! Any company I’d decide to sign up with would be lucky to have me! I could walk into pretty much any company I pick and leave with a job offer!

As a hiring manager, I can tell you that that’s not true. I don’t know you. My team doesn’t know you. And during the interviewing process, we will have a limited time to get to know you. Everyone you’ve ever worked with might agree that you’re the best engineer they’ve ever come across, but you are–unfortunately–back to square one when it comes to me and my team.

Here’s another secret: with most hiring teams, when it comes to assessing your viability as a candidate, it’s not all about your technical skills. It’s not even mostly about your technical skills. Don’t believe me? The next time a candidate comes into your organization for an on-site, ask the interviewers what their main decision factors were, yay or nay. While there is generally a technical baseline that candidates must meet, the decisions almost always come down to things like:

  • Personality
  • Potential fit within the team
  • Ability to learn and adapt
  • Eagerness about the company and what it’s doing

In Building Great Software Engineering Teams, Josh Tyler describes asking his engineering team to outline their most important qualities in a candidate. While the answers varied across team members, among the top answers were that the candidate:

  • can teach the team something
  • loves programming and technology
  • is pleasant to be around

Nowhere at the top of the list was anything about being able to write out algorithms on a whiteboard.

Why should you care?

You’ve been to plenty of interviews in your career, and you’ve landed good jobs. If you were to lose your job today (particularly if you’re living in one of the many hot engineering markets), you could probably land a new one within a week or two. You’re doing fine in your interviews, so why change anything?

Well, let me let you in on another secret: hiring is not binary. Candidates are not simply grouped into yeses and nos. There are, of course, candidates that we outright don’t want to hire. There are candidates that aren’t quite right at the moment, but whom we might consider in the future. And there are candidates that we would like to hire.

And then there are the candidates that we need to hire.

You, of course, want to be in that last category. Because here’s yet another secret: once you’ve landed in that category, you’re in control of the hiring process. You’re much more likely to command whatever it is that you’re looking for. A higher salary. More equity. A signing bonus. Lenience in your starting date. Extra vacation. Even a higher role within the organization (which presumably, would carry higher financial incentives).

Once you’ve landed in that category, you’re in control of the hiring process.

In other words, there is a huge difference between a team’s grudging agreement to make you an offer, and a team’s insistence that it must do whatever it takes to get you onboard.

Also, consider your career trajectory. Many engineers enjoy good, comfortable careers having never risen above the level of Senior Engineer. But to get much beyond that level, non-technical skills become much more important. Competition becomes tighter, and we hiring managers become more picky.

So what should you do?

In preparing for interviews, most candidates focus on honing their technical skills. That is, of course, not a bad idea. After all, you will be assessed on them… in part. But you should spend equal time, if not more, on being able to sell yourself as a person to the interviewing team. You might have memorized how to reverse a linked list. But will you be able to convey your enthusiasm for programming? Are you prepared to demonstrate your ability to learn? Will you be able to teach your interviewers something?

Your interviewing preparation should start with one premise: You are interviewing the company just as much as they are interviewing you.

We place too much emphasis on the notion that interviews are all about the candidate attempting to prove him or herself to the interviewers, in hopes of receiving a job offer. But the inverse is also true. If you’re worth your salt as a software engineer, then you should be just as picky as they are. You should insist that the company sell itself to you.

If you adopt that mindset, everything else will start falling into place.

Be choosy about whom you interview with

In fact, this mindset should come into play before you begin talking to any companies. It should guide your decision as to which companies you’re even willing to talk to. Look for companies that are doing things that excite you. If you’re contacted by a recruiter, do some research on the company that they’re pitching. Does their work sound interesting to you? Is it an industry you know anything about, or care about, at all? Find out what technologies the company is using. Do you have any special expertise or interest in those technologies? 

In other words, how much of an intersection is there between the company and what you want to be doing?

If there is little intersection, then you’ll find yourself at a disadvantage in talking with these companies. They also, clearly, might not be the right companies for you to join in the first place. Conversely, if there is a lot of overlap, you’ve potentially found a company that you would enjoy working at, and for which you’d be better positioned to prove yourself.

Think of phone screens as conversations

You’ve found a company that is doing things that excite you. You’ve set up your first interview. More than likely this will be a so-called phone screen, maybe with a recruiter or perhaps with a hiring manager.

What usually happens in these screens? The interviewer asks you questions, and you answer them, right? What if, instead, you asked them the questions? Not for the whole conversation, of course; certainly the interviewers have things that they want to know of you. But your answers should segue into questions for them. 

For example, if you’re a Java programmer, you might be asked the latest version of Java you’ve used. Your answer should go beyond simply stating the version number. Talk about some of the features introduced in that version. If you were involved with standardizing your organization on that version, tell them about that. Moreover, ask them what version they are on. If it’s not the latest version, why haven’t they upgraded? If it is the latest version, why did they upgrade?

Ask your interviewers meaningful questions

Most interviewers will reserve some time–usually at the beginning of the session or at the end–for questions. This time is generally very short, so you should ensure two things:

  1. you don’t squander the time asking trite questions, and
  2. you ask questions throughout the session

Don’t ask trite questions

I can’t tell you how many candidates have asked me “what’s your tech stack?” This question tells me a couple of things about the candidate.

First, that they’re not willing to apply much thought or creativity when it comes to solving a problem (the problem, in this case, being what question to ask me). Second, that they couldn’t be bothered to do the minimal amount of research that it would’ve taken to answer that question on their own.

It may be that you truly want to talk about the company’s tech stack, if there is something noteworthy that ties to your own experience or interests. Here are some better ways to broach the subject:

  • “I noticed that you use Go as well as Node.js. I’ve mixed both of them in projects as well. What are the use cases here for each of them?”

or

  • “I’ve heard that RxJava is used here. That’s exciting to me, since I’ve been using it on some of my own projects at home. Why was it adopted here?”

Remember, this is your chance to convey your overall love of technology and excitement at the prospect of working for the company. 

Ask questions throughout the session

Interviewing sessions are usually short. With any given interviewer, you’ll likely have somewhere between thirty and ninety minutes to make an impression. During this time, the interviewer will, naturally, do most of the question-asking. And of course, you should answer all of the questions directly and fully. Ducking questions and avoiding what they’re asking never comes across well.

But remember, you are also learning about them during that period of time. So after answering a question, consider pivoting to ask them a similar question. For example, if you’re asked about your experience with building microservices, you might end your answer by asking things like, “How did this company get started building microservices?” or “How are you handling errors that occur during event consumption?”

This is an effective way to show that you are interested in what the company is doing, and to tie your interests and experience to what the company needs in a candidate.

Learn something

Most interviewers also want to be sure that you’re willing and able to learn things. This starts with asking good questions, and continues with attentive listening to the answers to those questions. Repeat back to your interviewers what you’ve been told, paraphrased, to show that you’ve understood.

I took an improv comedy class once. Comedians who perform improv sketches (so I was told, anyway) rely on the notion of “Yes, and”. This refers to how the comedians interact with each other. One comedian will start acting out a certain topic. The next comedian will join in and say (figuratively if not literally) “yes, and…”, continuing the first comedian’s line of thought.

While I wouldn’t suggest breaking into comedy during an interview, I highly recommend adopting this mentality. Listen to what your interviewer has to say, demonstrate that you’ve heard them, and then build on what they’ve said.

For example…

Them: “Our company has been deploying ReST services for a while, but we’ve started running into problems when we need to modify our APIs.”

You: “Yes, it’s easy to run into issues when making API changes to services that are out in production. I’ve found that the ‘expand and contract’ pattern can help with this…”

In this way, you’ve learned a bit about the company (don’t forget, you’re there to interview them too!), you’ve shown that you’ve listened and learned, and you’ve set yourself up for your next objective…

Teach them something

Most interviewers are also looking for candidates that they themselves, in turn, can learn from. So make it a point to teach something to each of your interviewers. 

This can happen as a natural extension of asking intelligent questions. Per the earlier example, you might ask your interviewer how their company handles errors during event consumption between microservices. Depending on your interviewer’s answer, you’ve got a pretty good opportunity to tell them how you have solved the problem in the past, and to make suggestions as to how they might approach it.

Of course, a typical engineering interview session will involve the interviewer asking a technical question, handing you a dry-erase marker, and pointing you to a whiteboard. Often you’ll hear advice that you should keep talking through your solution. This is sound advice. But if you’re focused on merely talking, it’s easy to digress into muttering to yourself.

Instead, picture yourself as the teacher–you are, after all, standing at a whiteboard with a pen and an attentive audience! Talk to your interviewer as if you’re explaining to them how to solve the problem. This shift in attitude will ensure that you’re talking to them, and will help you project a level of confidence and authority.

It’s a different mindset, and takes practice

Thinking of an interview as a conversation that you help steer… well, that takes a while to get used to. But it’s the mindset that will help convey that you’re the engineer that companies not just want–but need–to hire.

CanWeAllAgreeThatWeWillAimForClarityAsWeNameClasses?

I like to be productive and efficient. So anything that unnecessarily wastes even just a few moments of my time… well, it really bugs me.

That is why I am particularly bugged by a particular practice (nay, an anti-practice) that I see too often. FYI, AFAIK most folks would agree with me, so IDK why this anti-practice is so pervasive.

I’m talking about CamelCased class names with capitalized acronyms.

A camel. See if you can tell how this animal inspired the term “CamelCase”.

You see these fairly often in Java. EOFException. URLEncoder. ISBNValidator. Mercifully, the good folks who designed the java.net package decided to use Http rather than HTTP in class names, so we’re not cursed with the likes of HTTPURLConnection. Not so with Apple, however, as anyone who’s worked with Objective-C, or even Swift, can attest. The two-to-three-uppercase-letter classname prefix standard (for example, NSThis, NSThat, NSTheOther) is bad enough. But when you couple that with Apple’s overzealous penchant for capitalizing acronyms, you wind up reading AFJSONRequestOperations from your NSHTTPURLResponses.

The weird-ass camel that some language designers have apparently once seen.

SMH.

Though they’re rare, I’ve encountered programmers who vociferously favor the all-caps approach. The rationale is usually that it is, simply, proper grammar. And that’s fair enough. But guess what? It’s also proper grammar to put spaces between words, and to lowercase the first letter of non-proper nouns (unless your nouns are German). Yet we programmers happily break those grammatical decrees on a daily basis.

Instead of being GrammarSticklers, we should be crafting class names to convey the classes’ meanings as clearly and quickly as possible. It’s not that these class names are illegible. It’s simply that they are less legible than they should be. It takes an extra beat or two to understand them. And I don’t know about you, but I don’t like wasting beats.

Am I overreacting? IDK, maybe I am. But thanks for bearing with me anyway. I won’t waste any more of your time.

Collections and Encapsulation in Java Never, ever return null when you're supposed to be returning a Collection.

A core tenet of object oriented programming is encapsulation: callers should not have access to the inner workings of a class. This is something that newer languages such as Kotlin, Swift, and Ceylon have solved well with first-class properties.

Java—having been around for quite a while—does not have the concept of first-class properties. Instead, the JavaBeans spec was introduced as Java’s method of enforcing encapsulation. Writing JavaBeans means that you need to make your class’s fields private, exposing them only via getter and setter methods.

If you’re like me, you’ve often felt when writing JavaBeans that you were writing a bunch of theoretical boilerplate that rarely served any practical purpose. Most of my JavaBeans have consisted of private fields, and their corresponding getters and setters that do nothing more than, well, get and set those private fields. More than once I’ve been tempted to simply make the fields public and dispense with the getter/setter fanfare, at least until a stern warning from the IDE sent me back, tail between my legs, to the JavaBeans standard.

Recently, though, I’ve realized that encapsulation and the JavaBean/getter/setter pattern is quite useful in a common scenario: Collection-type fields. How so? Let’s fabricate a simple class:

public class MyClass {

    private List<String> myStrings;

}

We have a field—a List of Strings—called myStrings, which is encapsulated in MyClass. Now, we need to provide accessor methods:

public class MyClass {

    private List<String> myStrings;

    public void setMyStrings(List<String> s) {
        this.myStrings = s;
    }

    public List<String> getMyStrings() {
        return this.myStrings;
    }

}

Here we have a properly encapsulated, if verbose, class. So we’ve done good, right? Hold that thought.

Optional lessons

Consider the Optional class, introduced in Java 8. If you’ve done much work with Optionals, you’ve probably heard the mantra that you should never return null from a method that returns an Optional. Why? Consider the following contrived example:

public class Foo {

    private String bar;

    public Optional<String> getBar() {
        return (bar == null) ? null : Optional.of(bar);
    }

}

Now clients can use the method thusly:

foo.getBar().ifPresent(log::info);

and risk throwing a NullPointerException. Alternatively, they could perform a null check:

if (foo.getBar() != null) {
    foo.getBar().ifPresent(log::info);
}

Of course, doing that defeats the very purpose of Optionals. In fact, it so defeats the purpose of Optionals that it’s become standard practice that any API that returns Optional will never return a null value.
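The fix for the contrived getBar() above is Optional.ofNullable(), which wraps a possibly-null value for us:

```java
import java.util.Optional;

public class Foo {

    private String bar;

    // Optional.ofNullable() returns Optional.empty() when bar is null,
    // so this method itself can never return null
    public Optional<String> getBar() {
        return Optional.ofNullable(bar);
    }
}
```

Now foo.getBar().ifPresent(log::info) is safe whether or not bar has ever been set.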

Back to Collections. Much like an Optional contains either none or one, a Collection contains either none or some. And much like Optionals, there should be no reason to return null Collections (except maybe in rare, specialized cases, none of which come to mind). Simply return an empty (zero-sized) Collection to indicate the lack of any elements.
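As a sketch of the rule (the PersonDao here and its in-memory storage are purely illustrative), a finder method can fall back to Collections.emptyList() instead of null:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory DAO, just to show the empty-instead-of-null pattern
public class PersonDao {

    private final Map<String, List<String>> personsByName = new HashMap<>();

    public List<String> getPersonsByName(String name) {
        // getOrDefault() hands back an immutable empty List when there is no match
        return personsByName.getOrDefault(name, Collections.emptyList());
    }
}
```

Callers can then chain .size(), iterate, or stream the result without ever needing a null check.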

The drawer still exists even when all the socks are removed, right?

It’s for this reason that it’s becoming more common to ensure that methods that return Collection types (including arrays) never return null values, the same as methods that return Optional types. Perhaps you or your organization have already adopted this rule in writing new code. If not, you should. After all, would you (or your clients) rather do this?:

boolean isUnique = personDao.getPersonsByName(name).size() == 1;

Or have your code littered with the likes of this?

List<Person> persons = personDao.getPersonsByName(name);
boolean isUnique = (persons == null) ? false : persons.size() == 1;

So how does this relate to encapsulation?

Keeping Control of our Collections

Back to our MyClass class. As it is, an instance of MyClass could easily return null from the getMyStrings() method; in fact, a fresh instance would do just that. So, to adhere to our new never-return-a-null-Collection guideline, we need to fix that:

public class MyClass {

    private List<String> myStrings = new ArrayList<>();

    public void setMyStrings(List<String> s) {
        this.myStrings = s;
    }

    public List<String> getMyStrings() {
        return this.myStrings;
    }

}

Problem solved? Not exactly. Any client could call aMyClass.setMyStrings(null), in which case we’re back to square one.

At this point, encapsulation sounds like a practical—rather than solely theoretical—concept. Let’s expand the setMyStrings() method:

public void setMyStrings(List<String> s) {
    if (s == null) {
        this.myStrings.clear();
    } else {
        this.myStrings = s;
    }
}

Now, even when null is passed to the setter, myStrings will retain a valid reference (in the example here, we take null to mean that the elements should be cleared out). And of course, assigning null to the reference returned by aMyClass.getMyStrings() will have no effect on aMyClass’s underlying myStrings variable. So are we all done?

Er, well, sort of. We could stop here. But really, there’s more we should do.

Consider that we are replacing our private ArrayList with the List passed to us by the caller. This has two problems: first, we no longer know the exact List implementation used by myStrings. In theory, this shouldn’t be a problem, right? Well, consider this:

myClass.setMyStrings(Collections.unmodifiableList(Arrays.asList("Heh, gotcha!")));

So if we ever update MyClass such that it attempts to modify the contents of myStrings, bad things can start happening at runtime.

The second problem is that the caller retains a reference to our underlying List. That caller can now directly manipulate our List.
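A quick sketch of that second problem: because the getter hands back the private List itself at this stage, a caller can bypass the setter entirely:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {

    // MyClass as it stands so far: the getter leaks the internal List
    static class MyClass {
        private List<String> myStrings = new ArrayList<>();

        public void setMyStrings(List<String> s) {
            if (s == null) {
                this.myStrings.clear();
            } else {
                this.myStrings = s;
            }
        }

        public List<String> getMyStrings() {
            return this.myStrings;
        }
    }

    public static void main(String[] args) {
        MyClass mc = new MyClass();
        mc.setMyStrings(new ArrayList<>(List.of("a", "b")));

        // The caller now holds the very same List object as the private field...
        List<String> leaked = mc.getMyStrings();
        leaked.clear(); // ...and can wipe out MyClass's state directly

        System.out.println(mc.getMyStrings()); // prints []
    }
}
```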

What we should be doing is storing the elements passed to us in the ArrayList to which myStrings was initialized. While we’re at it, let’s really embrace encapsulation. We should be hiding the internals of our class from outside callers. The reality is that callers of our classes shouldn’t care whether there’s an underlying List, or Set, or array, or some runtime dynamic code-generation voodoo, that’s storing the Strings that we pass to it. All they should know is that Strings are being stored somehow. So let’s update the setMyStrings() method thusly:

public void setMyStrings(Collection<String> s) {
    this.myStrings.clear(); 
    if (s != null) { 
        this.myStrings.addAll(s); 
    } 
}

This has the effect of ensuring that myStrings ends up with the same elements contained within the input parameter (or is empty if null is passed), while ensuring that the caller doesn’t have a reference to myStrings.
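With the copying setter in place, the caller’s List and myStrings are now fully independent, which a quick sketch can confirm:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class CopyingSetterDemo {

    static class MyClass {
        private final List<String> myStrings = new ArrayList<>();

        // Copies the caller's elements instead of adopting the caller's List
        public void setMyStrings(Collection<String> s) {
            this.myStrings.clear();
            if (s != null) {
                this.myStrings.addAll(s);
            }
        }

        public List<String> getMyStrings() {
            return this.myStrings;
        }
    }

    public static void main(String[] args) {
        MyClass mc = new MyClass();
        List<String> input = new ArrayList<>(List.of("a", "b"));
        mc.setMyStrings(input);

        input.add("c"); // modifies only the caller's List...
        System.out.println(mc.getMyStrings()); // ...so this prints [a, b]

        mc.setMyStrings(null); // null now simply empties the internal List
        System.out.println(mc.getMyStrings()); // prints []
    }
}
```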

Now that myStrings’ reference can’t be changed, let’s just make it a constant:

public class MyClass {
    private final List<String> myStrings = new ArrayList<>();
    ...
}

While we’re at it, we shouldn’t be returning our underlying List via our getter. That too would leave the caller with a direct reference to myStrings. To remedy this, recall the “defensive copy” mantra that Effective Java beat into our heads (or, at least, should have):

public List<String> getMyStrings() {
    // depending on what, exactly, we want to return
    return new ArrayList<>(this.myStrings);  
}

At this point, we have a well-encapsulated class that eliminates the need for null-checking whenever its getter is called. We have, however, taken some control away from our clients. Since they no longer have direct access to our underlying List, they can no longer, say, add or remove individual Strings. 

No problem. We can simply add methods like

public void addString(String s) {
    this.myStrings.add(s);
}

and

public void removeString(String s) { 
    this.myStrings.remove(s); 
}

Might our callers need to add multiple Strings at once to a MyClass instance? That’s fine as well:

public void addStrings(Collection<String> c) {
    if (c != null) {
        this.myStrings.addAll(c);
    }
}

And so on…

public void clearStrings() {
    this.myStrings.clear();
}

public void replaceStrings(Collection<String> c) {
    clearStrings();
    addStrings(c); 
}

Collecting our thoughts

Here, then, is what our class might ultimately look like:

public class MyClass {

    private final List<String> myStrings = new ArrayList<>();

    public void setMyStrings(Collection<String> s) {
        this.myStrings.clear(); 
        if (s != null) { 
            this.myStrings.addAll(s); 
        } 
    }

    public List<String> getMyStrings() {
        return new ArrayList<>(this.myStrings);
    }

    public void addString(String s) { 
        this.myStrings.add(s); 
    }

    public void removeString(String s) { 
        this.myStrings.remove(s); 
    }

    // And maybe a few more helpful methods...

}
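A brief usage sketch (with MyClass condensed from the listing above) shows the null-safety and encapsulation in action:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class MyClassDemo {

    // Condensed from the full listing above
    static class MyClass {
        private final List<String> myStrings = new ArrayList<>();

        public void setMyStrings(Collection<String> s) {
            this.myStrings.clear();
            if (s != null) {
                this.myStrings.addAll(s);
            }
        }

        public List<String> getMyStrings() {
            return new ArrayList<>(this.myStrings); // defensive copy
        }

        public void addString(String s) {
            this.myStrings.add(s);
        }
    }

    public static void main(String[] args) {
        MyClass mc = new MyClass();
        mc.addString("hello");

        mc.getMyStrings().clear(); // clears only the copy, not the field
        System.out.println(mc.getMyStrings()); // prints [hello]

        mc.setMyStrings(null); // and the getter still never returns null
        System.out.println(mc.getMyStrings()); // prints []
    }
}
```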

With this, we’ve achieved a class that:

  • is still basically a POJO that conforms to the JavaBean spec
  • fully encapsulates its private member(s)

and most importantly ensures that its method that returns a Collection always does just that–returns a Collection–and never returns null.


Engineer, promote thyself Increasing your value as a software engineer, two steps at a time

Most of us chose software engineering as a career because we love what we do. Few other fields can reliably give us the satisfaction that writing software does. But there comes a time when you as an engineer will need to decide whether you’ll be content with a career as an individual contributor, or whether you want to advance in your role, taking on more responsibilities and more interesting challenges (and, of course, making more money).

If you plan to stick with simply writing code until you retire, then this article is probably not for you. But if you’re reaching the point where you want to expand your career beyond the individual contributor level, then read on.

Face it: you need to market yourself

Many of us (myself included) have become spoiled by today’s job market. Software engineers–even mediocre ones–tend to be in high demand. But the fact is, plenty of companies can still afford to be picky about who they hire, particularly when it comes to technical leadership roles. In order to compete for the best roles at the best companies, we need to make ourselves more valuable. But being more valuable is not enough. As they say, perception is everything. So we need to do something that most of us aren’t comfortable with: marketing ourselves. In other words, we need not only to be the most valuable, but also to convince our future employers that we are.

Below are some ideas about how to go beyond simply writing code for a living. All of these ideas have two distinct advantages. First, they show the rest of the world that you know what you’re talking about. That you are thoughtful about what you do, and that you can articulate your knowledge. In other words, they make you appear more valuable.

Second, they force you to sharpen your skills. They ensure that you’ve considered different points of view and are still certain that yours is the best. They help you stretch your skills beyond writing code, improving your research and communication skills. Many of them also open the door to networking with folks who are well-entrenched in the industry. In other words, they make you become more valuable.

Writing

Start a blog

You’ve probably already written a lot about programming. Maybe you’ve contributed to your company’s wiki page. Or you’ve replied to questions on StackOverflow. Maybe you’ve written notes to help yourself remember how to solve a particularly vexing issue. Or you’ve written an email to another engineer explaining a particular programming topic.

There is no reason why you can’t apply those same skills and write some articles. For starters, you can create your own blog. There is some overhead to creating a blog, but not much. If you already have an account with a cloud provider like AWS, then firing up a new instance to host a WordPress site is trivial. Even easier is to simply set up an account with a site like Blogger. Any time you figure out something interesting, blog about it.

You will of course want to spend a bit more time on a publicly-facing article than you would otherwise. Proofread it more carefully. Let it sit for a day or two and re-read it to make sure it makes sense, and edit it as necessary. Maybe have a trusted friend or colleague look it over as well.

Once you’ve posted it, you’ll want to market it. Depending on the topic you write about, you may simply get traffic directly from organic online searches. Beyond that, you’ll want to be sure that you’re linking to your articles from anywhere else that you have an online presence: Twitter, LinkedIn, etc.

Publish articles on other sites

Once you’ve sharpened your writing skills, you can try publishing your articles on existing online engineering publications. You might think that articles on the likes of JavaWorld and DZone are penned by professional writers, but that’s not the case. In fact, most articles on these sites are written by software engineers like you, and most of these publications allow you to submit articles.

Getting an article published in such a publication is a little (but not much) more work than writing for your own blog. For starters, there’s no guarantee that your writing will be accepted. You’ll want to be even more scrupulous with your writing if you choose this route. Different publications will have different guidelines that you’ll need to follow. Some require you to first submit a proposal before even submitting your writing at all. But they provide a large built-in audience, and carry with them a certain amount of prestige.

Another option is to get yourself involved in engineering communities that interest you. Show enough interest and expertise, and you may be invited to contribute. As an example, I became interested in the Vert.x framework, so I began participating in its discussion forums, and occasionally tweeted about it. After one tweet about how quickly my colleague and I were able to use the framework, I was asked by a Zero Turnaround evangelist–who was also active in the Vert.x community–to write an article about my experience.

Write a book

Depending on your area of expertise, and what ideas you might have, writing a book can also be an option. It’s a time-consuming effort, for sure, but one that can pay off in spades in terms of establishing yourself as a subject matter expert. These days, you have a couple of options:

  • Sell your idea to a publisher like O’Reilly or Manning
  • Self-publish your book

Working with a publisher

Granted, you’ll generally want to start with a novel concept; most publishers will already have authors lined up for How to Program with Java 12, for example. But you’ll be surprised at how easy it is to pitch a niche topic. I was once in a conversation with a publisher from Manning, when he asked me about any topics that I thought would make a good book. As a Java developer who had recently spent time learning Objective-C (this being before Swift came along) I mentioned that Objective-C for Java Developers would have made my life easier. A few days later the publisher contacted me and asked if I’d like to write that book.

While I didn’t wind up writing that book (I did receive a contract from the publisher, but decided to focus my energy on the startup company I’d just joined), in retrospect I often wish that I had. While I’m certain I would’ve made little direct money, I’ve heard from other tech authors about the boosts that they’ve seen in their careers. They garner more respect from other people, and in general find it easier to do what they want with their careers, be it getting the job they really want, speaking or publishing more, commanding more money, etc. And along the way, they’ve sharpened their skills and researched their area of expertise far more than they would’ve otherwise.

Working with a publisher has its drawbacks. For example, most publishers have a style in which they want their books written. You’ll work closely with editors, iterating and rewriting. While this will typically result in a much better-written text, it can be time-consuming and, at times, frustrating, as you’ll be forced to relinquish much of your creative control to someone else. And while publishers will often offer you a monetary advance as you’re writing your book, you’ll wind up making only a small fraction of each book sale.

Self-Publishing

By contrast, it’s become quite easy these days to write and publish your own book. Publishing platforms like LeanPub allow you to focus on writing your book the way you want to write it. They handle the mechanics for you: formatting and publishing, promotion, payments, etc. Most also make it simple to integrate with publishers of physical books (such as Amazon’s CreateSpace). And you’ll be able to keep a much larger percentage of each book sale. Plus, while a publisher can turn your idea down, no one will prevent you from self-publishing.

Of course, when it comes to actually writing the book, you’re on your own. You’ll have no editor guiding you through the writing process (which can be a positive as well as a negative). You’ll be forced to create, and stick with, your own writing schedule (again, this can be a pro or a con). The differences can be summed up like so:

  • Guarantee: First and foremost, self-publishing guarantees that you’ll be able to write about what you want to write about. Established publishers might turn you and your idea down.
  • Prestige: While both options go a long way towards marketing yourself, there is a certain amount of added prestige when working with an established publisher.
  • Control: When self-publishing, you retain control. You write what you want, according to your own schedule. Publishers will insist that you work with an editor, and that you follow their timeline.
  • Monetary: Neither option is likely to make you rich–at least, not directly–so this should not be your primary motivation. With that said, self-publishing allows you to keep a higher percentage of books sold, while working with a publisher generally garners you an up-front advance. Also, while a publisher will keep more money per book, they are likely to be able to increase the overall number of books sold.

Speaking

Speaking is another great way to market yourself and hone your skills. If you haven’t done much speaking before, it can be intimidating to just step in front of hundreds of strangers and start talking. However, there are a number of baby steps that you can take.  

Speak at your own company

A great way to build speaking skills and confidence is to start with a small, friendly audience. Assuming you work at a company with other people (engineers or otherwise), giving a technical presentation is a great way to get into public speaking. Your first question might be What should I talk about? My advice is to pick from one of the following:

  1. A topic that no one else in the company knows much about, but is important for people to understand.
  2. A topic that other people in the company may be familiar with, but that you in particular excel at.

For item #1 above, you’ll often find interesting topics at the periphery of programming that have been neglected at your company. Maybe no one knows how to write a good integration test. Or maybe monitoring and tracing are something your company hasn’t gotten to yet. Take it upon yourself to research and learn the topic, well enough that you can explain it to your colleagues.

For item #2, is there anything for which your colleagues regularly rely on you? This could be anything from git, to specific frameworks, to patterns and best practices. Be sure to outline your talk first. Don’t plan to just wing it. At the same time, don’t fully script your talk. Give yourself permission to ad lib a little bit, and to adjust a bit (say, go deeper into one particular topic, or to pull back on–or even skip entirely–other topics) if you feel that you need to.

Find a meet up to speak at

Engineering meet ups have become extremely common lately. The problem is, lots of engineers want to attend meet ups, but few want to speak. As a result, organizers are often searching for presenters. You can use that to your advantage.

If you haven’t already, find a few good local meet ups to attend (in and of itself, it’s a great way to network and to explore stuff that you don’t typically work with). Get to know the organizers. Then, once you’ve gotten a few talks down at your own company, you can volunteer to present at one of their meet ups. Odds are they’ll be thrilled to take you up on your offer. What should you talk about at a meet up? I’ve been to some great meet ups that have fallen under the following categories:

  1. Deep dives into commonly-used technologies
  2. Introductions to new/emerging technologies
  3. Novel or interesting applications of a given technology

Ordinarily, talks falling under the first category are best given by those most familiar with the technologies. Since these technologies are commonly used, the audience will–almost by definition–be filled with folks who already know a fair amount about them. For example, I’ve been to a talk about RxJava, given by the team from Netflix that ported the ReactiveX framework from .NET to Java. I’ve also attended a talk about Hazelcast… from one of the main Hazelcast developers.

The second category, however, is a different story. If you’ve become familiar with a new technology that has yet to gain widespread use, then you’re uniquely positioned to provide an overview of it. A year or so ago, for example, I attended an interesting talk about Kotlin given by members of an Android engineering team. The team had no special association with Kotlin itself, other than having adopted the language a while back. Yet their presentation was well-attended and well-received.

If you’ve made interesting use of a certain technology, then you also have a great opportunity to present. Another interesting talk I’d gone to was given by an engineer whose team had used the Vert.x framework to create a novel in-memory CQRS product.

Conferences

If you’ve attended tech conferences, it can seem as though only engineers who are top in their field are invited to speak. But that’s simply not true. Even more than meet up organizers, conference organizers need to flesh out their conferences with a variety of speakers.

When submitting a conference proposal, however, the topic you plan to discuss becomes very important. While not everyone who speaks is an industry leader, chances are that organizers will be more particular about who presents general topics. Come up with a good niche topic, however, and you’ve got a good chance of being invited to speak. In other words, a talk on What’s new in Spring Framework 6 will likely go to an actual Spring committer. But a talk like, say, Rendering Serverside Graphics Using Websockets and LWJGL, would be fair game.

How would you submit a talk? Generally, conferences have a Call For Papers (CFP), which is a period of time in which they are soliciting ideas for sessions. Check the websites of the conferences you want to target for the dates of their CFPs. You’ll find a lot of advice online about how to craft a submission, but common tips include:

  • Submit early in the process
  • Pay attention to your session’s title and make sure it’s both accurate and enticing
  • Don’t pitch a session that attempts to sell a product
  • Be sure you thoroughly understand–and follow–the speaker guidelines

When to use Abstract Classes Abstract classes are overused and misused. But they have a few valid uses.

Abstract classes are a core feature of many object-oriented languages, such as Java. Perhaps for that reason, they tend to be overused and misused. Indeed, discussions abound about the overuse of inheritance in OO languages, and inheritance is core to using abstract classes. 

 

In this article we’ll use some examples of patterns and anti-patterns to illustrate when to use abstract methods, and when not to. 

 

While this article presents the topic from a Java perspective, it is also relevant to most other object-oriented languages, even those without the concept of abstract classes. To that end, let’s quickly define abstract classes. If you already know what abstract classes are, feel free to skip the following section.

Defining Abstract Classes

Technically speaking, an abstract class is a class which cannot be directly instantiated. Instead, it is designed to be extended by concrete classes which can be instantiated. Abstract classes can—and typically do—define one or more abstract methods, which themselves do not contain a body. Instead, concrete subclasses are required to implement the abstract methods.

 

Let’s fabricate a quick example:
public abstract class Base {

    public void doSomething() {
        System.out.println("Doing something...");
    }

    public abstract void doSomethingElse();

}
Note that doSomething()–a non-abstract method–has a body, while doSomethingElse()–an abstract method–does not. You cannot directly instantiate an instance of Base. Try this, and your compiler will complain:
Base b = new Base();
Instead, you need to subclass Base like so:
public class Sub extends Base {

    @Override
    public void doSomethingElse() {
        System.out.println("Doin' something else!");
    }

}
Note the required implementation of the doSomethingElse() method.
 
Not all OO languages have the concept of abstract classes. Of course, even in languages without such support, it’s possible to simply define a class whose purpose is to be subclassed, and define either empty methods, or methods that throw exceptions, as the “abstract” methods that subclasses override.
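As a rough illustration of that approach, here’s a sketch (the class names are invented for illustration) of what this emulation looks like: the would-be abstract method simply throws until a subclass overrides it.

```java
// Emulating an abstract class without language support.
// The "abstract" method throws unless a subclass overrides it.
public class PseudoBase {

    public void doSomething() {
        System.out.println("Doing something...");
    }

    // Subclasses are expected to override this "abstract" method.
    public String doSomethingElse() {
        throw new UnsupportedOperationException(
                "Subclasses must override doSomethingElse()");
    }
}

class PseudoSub extends PseudoBase {
    @Override
    public String doSomethingElse() {
        return "Doin' something else!";
    }
}
```

The trade-off, of course, is that a forgotten override fails at runtime rather than at compile time, which is precisely what real abstract classes protect you from.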

The Swiss Army Controller

Let’s examine a common abuse of abstract classes that I’ve come across frequently. I’ve been guilty of perpetuating it; you probably have, too. While this anti-pattern can appear nearly anywhere in a code base, I tend to see it quite often in Model-View-Controller (MVC) frameworks at the controller layer. For that reason, I’ve come to call it the Swiss Army Controller.

 

The anti-pattern is simple: A number of subclasses, related only by where they sit in the technology stack, extend from a common abstract base class. This abstract base class contains any number of shared “utility” methods. The subclasses call the utility methods from their own methods.

 

Swiss army controllers generally come into existence like this:

 

  1. Developers start building a web application, using an MVC framework such as Jersey.
  2. Since they are using an MVC framework, they back their first user-oriented webpage with an endpoint method inside a UserController class.
  3. The developers create a second webpage, and therefore add a new endpoint to the controller. One developer notices that both endpoints perform the same bit of logic—say, constructing a URL given a set of parameters—and so moves that logic into a separate constructUrl() method within UserController.
  4. The team begins work on product-oriented pages. The developers create a second controller, ProductController, so as to not cram all of the methods into a single class.
  5. The developers recognize that the new controller might also need to use the constructUrl() method. At the same time, they realize hey! those two classes are controllers! and therefore must naturally be related. So they create an abstract BaseController class, move constructUrl() into it, and add extends BaseController to the class definitions of UserController and ProductController.
  6. This process repeats until BaseController has ten subclasses and 75 shared methods.

Now there are a ton of useful methods for the concrete controllers to use, simply by calling them directly. So what’s the problem?

 

The first problem is one of design. All of those different controllers are in fact unrelated to each other. They may live at the same layer of our stack, and may perform a similar technical role, but as far as our application is concerned, they serve different purposes. Yet we’ve now locked them to a fairly arbitrary object hierarchy.

 

The second is more practical. You’ll realize it the first time you need to use one of the 75 shared methods from somewhere other than a controller, and you find yourself instantiating a controller class to do so.
String url = new UserController().constructUrl(key, value);
You’ll have created a trove of useful methods which now require a controller instance to access. Your first thought might be something like, hey, I can just make the method static in the controller, and use it like so:
String url = UserController.constructUrl(key, value);
That’s not much better, and actually, a little worse. Even if you’re not instantiating the controller, you’ve still tied the controller to your other classes. What if you need to use the method in your DAO layer? Your DAO layer should know nothing about your controllers. Worse, in introducing a bunch of static methods, you’ve made testing and mocking much more difficult.

 

It’s important to emphasize the interaction flow here. In this example, a call is made directly to one of the concrete subclasses’ methods. Then, at some point, this method calls in to one or more of the utility methods in the abstract base class.

 

 

In fact, in this example there was never a need for an abstract base controller class. Each shared method should have been moved either to an appropriate service-layer class (if it handles business logic) or to a utility class (if it provides general, supplementary functionality). Of course, as mentioned above, the utility classes should still be instantiable, and not simply filled with static methods.
 
Now there is a set of utility methods that is truly reusable by any class that might need them. Furthermore, we can break those methods into related groups. The above diagram depicts a class called UrlUtility which might contain only methods related to creating and parsing URLs. We might also create a class with methods related to string manipulation, another with methods related to our application’s current authenticated user, etc.
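A minimal sketch of what such a utility class might look like (the URL format and constructor are invented for illustration):

```java
// Hypothetical sketch: a small, focused, instantiable utility class.
// Because it's an ordinary instance, it can be injected anywhere
// (controller, service, or DAO) and easily mocked in tests.
public class UrlUtility {

    private final String baseUrl;

    public UrlUtility(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Builds a URL with a single query parameter.
    public String constructUrl(String key, String value) {
        return baseUrl + "?" + key + "=" + value;
    }
}
```

A class that needs this functionality simply accepts a UrlUtility through its constructor, rather than inheriting the method from an arbitrary base class.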

 

Note also that this approach fits nicely with the composition over inheritance principle.

 

Inheritance and abstract classes are powerful constructs. As such, numerous examples abound of their misuse, the Swiss Army Controller being a common one. In fact, I’ve found that most typical uses of abstract classes can be considered anti-patterns, and that there are few good uses of abstract classes.

The Template Method

With that said, let’s look at one of the best uses, described by the template method design pattern. I’ve found the template method pattern to be one of the lesser known–but more useful–design patterns out there.

 

You can read about how the pattern works in numerous places. It was originally described in the Gang of Four’s Design Patterns book, and many descriptions can now be found online. Let’s see how it relates to abstract classes, and how it can be applied in the real world.

 

For consistency, I’ll describe another scenario that uses MVC controllers. In our example, we have an application for which there exist a few different types of users (for now, we’ll define two: employee and admin). When creating a new user of either type, there are minor differences depending on which type of user we are creating. For example, assigning roles needs to be handled differently. Other than that, the process is the same. Furthermore, while we don’t expect an explosion of new user types, we will from time to time be asked to support a new type of user.

 

In this case, we would want to start with an abstract base class for our controllers. Since the overall process of creating a new user is the same regardless of user type, we can define that process once in our base class. Any details that differ will be relegated to abstract methods that the concrete subclasses will implement:
public abstract class BaseUserController {

    // ... variables, other methods, etc

    @POST
    @Path("/user")
    public UserDto createUser(UserInfo userInfo) {
        UserDto u = userMapper.map(userInfo);
        u.setCreatedDate(Instant.now());
        u.setValidationCode(validationUtil.generatedCode());
        setRoles(u);  // to be implemented in our subclasses
        userDao.save(u);
        mailerUtil.sendInitialEmail(u);
        return u;
    }

    protected abstract void setRoles(UserDto u);

}
Then we need simply to extend BaseUserController once for each user type:
@Path("employee")
public class EmployeeUserController extends BaseUserController {

    protected void setRoles(UserDto u) {
        u.addRole(Role.employee);
    }

}
@Path("admin")
public class AdminUserController extends BaseUserController {

    protected void setRoles(UserDto u) {
        u.addRole(Role.admin);
        if (u.hasSuperUserAccess()) {
            u.addRole(Role.superUser);
        } 
    }

}
Any time we need to support a new user type, we simply create a new subclass of BaseUserController and implement the setRoles() method appropriately.

 

Let’s contrast the interaction here with the interaction we saw with the swiss army controller.
Using the template method approach, we see that the caller (in this case, the MVC framework itself, responding to a web request) invokes a method in the abstract base class, rather than in the concrete subclass. This is reinforced by the fact that the setRoles() method, which is implemented in the subclasses, is protected. In other words, the bulk of the work is defined once, in the abstract base class. Only the parts of that work that need to be specialized are implemented in the concrete subclasses.

A Rule of Thumb

I like to boil software engineering patterns down to simple rules of thumb. While every rule has its exceptions, I find it helpful to be able to quickly gauge whether I’m moving in the right direction with a particular design.

 

It turns out that there’s a good rule of thumb when considering the use of an abstract class. Ask yourself, Will callers of your classes be invoking methods that are implemented in your abstract base class, or methods implemented in your concrete subclasses?
  • If it’s the former–you are intending to expose only methods implemented in your abstract class–odds are that you’ve created a good, maintainable set of classes.
  • If it’s the latter–callers will invoke methods implemented in your subclasses, which in turn will call methods in the abstract class–there’s a good chance that an unmaintainable anti-pattern is forming.

Becoming Comfortable with the Uncomfortable Being a strong software engineering professional sometimes has nothing to do with software or engineering at all.

It’s comfortable to complain.

Most of us learned this the moment we were born. As babies, we quickly discovered that crying, loudly, got us what we wanted. A bit later in life, we applied these same lessons for ice cream, TV watching time, and staying up just a little bit later. As teenagers, well, complaining was just the thing to do.

Even as fledgling software engineers, we’ve gotten rewarded for complaining. Pointing out what’s wrong with an organization–its practices, its codebase, etc–initially got us positive attention. We should be using constants instead of hard-coded values in this class? Well done! We’ll assign someone to fix that. Our stand-ups should be moved from 9:45 to 10:00 in case people get in late? Good point. We’ll see about moving them. We should have detailed diagrams of our service architecture on our Wiki? Uh, sure… someone will get to it when they have time.

Because, let’s face it: complaining is easy. But pointing out flaws will only get you so far. At some point, someone needs to address those flaws. And before too long, you may find that your complaints are becoming a liability, and that it’s time to stop grumbling about problems and start doing something about them.

And if you’re like most of us, you’ll pick something comfortable. You write code, after all, so why not find some code to fix? You’ll spend an afternoon refactoring a class to make it more testable. You’ll spend an evening applying the command design pattern to some data access objects. Heck, maybe you’ll spend a weekend coding up a little framework to support your new command-design-pattern-data-access code (and, maybe, someone will actually use it). You’ll show everyone your work, and they’ll tell you what a good job you’ve done. And you’ll feel great, confident that you’ve shown everyone that you’re willing to do whatever it takes to help the organization succeed.

Except for one problem: you haven’t done that at all. You’ve shown that you can create fun, interesting coding projects and do them. And before too long, folks will start getting tired of seeing the latest pet project you’d spent your time on while they’d been slogging away at un-fun, uninteresting tasks.

So what’s an enterprising engineer, who truly wants to solve problems and prove their worth, to do?

The answer is easy: find something that’s hard. Something that everyone knows needs to get done, but no one wants to do. Something that’s outside of your comfort zone. Maybe even so far out that you don’t know how to get it done.

In 2015 I joined a company, a Java shop, that was running its applications on Java 7. As I’d been developing on Java 8 for about a year at that point, I was a little disappointed. So naturally I made a glib comment to my new co-workers, and received some knowing grumbles in return. This, or some variation of it, happened for the next few months. Someone would ask in an engineering meeting when we would finally be moving to Java 8. The answer would always be the same: someone has to step up and make it happen.

Finally, one day, I decided that I would be that someone. I knew the task would be tough. I had a full workload, of course, so this effort would be above and beyond what I was hired to do, especially given that I was taking it on voluntarily. There was a lot of risk involved. Even if it went smoothly, we were bound to run into build issues from time to time, and everyone would know who to blame. And in the worst case scenario, I could introduce insidious runtime issues that would reveal themselves early some morning in production.

Plus, I flat out didn’t know how to do it. Installing Java 8 on my own Mac was one thing. But getting the entire organization–its various development, QA, and production environments; monolithic applications and microservices; homegrown libraries and frameworks–all upgraded? I mean, I’m a software architect, not a sys admin!

But I figured the flip side would be that I’d provide a huge service to the organization, modernizing it, and making a number of engineers happy. Besides, I wanted to use streams and lambdas, dammit!

So I announced that I would lead the effort. And it was indeed a large task. I researched the issues commonly encountered by companies doing the same thing. I created a detailed list of dependencies, which drove the order in which applications were to be upgraded. I recruited members of the devops and QA teams, as well as members from individual engineering scrum teams, to be a part of the effort. And indeed, as the project progressed, more than one engineering team ran into vexing build issues.

But most surprising to me was that everybody was fully on board. Despite the effort involved, despite the road bumps encountered, every last person involved was willing to pitch in and help push through. I noticed that the wiki page I’d set up to document common issues and solutions started flourishing with input from other teams. Within a few months, every system in the company was running on Java 8. Years later, I would still receive the occasional word of thanks from engineers in the organization.

Later in the same company, I found myself–along with a number of engineers and engineering managers–complaining about various recruiting and interviewing policies that the engineering organization had adopted. The answer came back: “Someone else needs to come up with better solutions to bringing in good engineering candidates.” I paused for a beat, then agreed: if I’m complaining about it, then I should be willing to find a solution. Again, I didn’t have any particular love for, or skills with, the topic at hand. I certainly was not a recruiter. But it was something that nearly everyone agreed needed to be done. So I assembled a series of agendas, and pulled together small teams to tackle each one. And slowly, we began to improve our recruiting and hiring process.

Since then, I’ve simply taken it for granted that if I identify a legitimate pain point in the organization, it’s my job as a member of that organization to help solve it–whether it has anything to do with software or not. As I spend less time writing code and more time in management and leadership roles, of course, I find myself tasked with a variety of non-engineering issues on a daily basis.

But even if you plan to write code for a living until the day you die (or, more optimistically, retire), taking on non-technical challenges is always a good idea. You’ll boost your own confidence, and your stature within the organization. You’ll get to work with different people, and learn some new things.

And you’ll find yourself with fewer things to complain about.

 

Using Vert.x to Connect the Browser to a Message Queue

Introduction

Like many of my software-engineering peers, my experience has been rooted in traditional Java web applications, leveraging Java EE or Spring stacks running in a web application container such as Tomcat or Jetty. While this model has served well for most projects I’ve been involved in, recent trends in technology–among them, microservices, reactive UIs and systems, and the so-called Internet of Things–have piqued my interest in alternative stacks and server technologies.

Happily, the Java world has kept up with these trends and offers a number of alternatives. One of these, the Vert.x project, captured my attention a few years ago. Plenty of other articles extol the features of Vert.x, which include its event loop and non-blocking APIs, verticles, event bus, etc. But above all of those, I’ve found in toying around with my own Vert.x projects that the framework is simply easy to use and fun.

So naturally I was excited when one of my co-workers and I recognized a business case that Vert.x was ideally suited for. Our company is a Java and Spring shop that is moving from a monolithic to a microservices architecture. Key to our new architecture is the use of an event log. Services publish their business operations to the event log, and other services consume them and process them as necessary. Since our company’s services are hosted on Amazon Web Services, we’re using Kinesis as our event log implementation.

Our UI team has expressed interest in enabling our web UIs to listen for Kinesis events and react to them. I’d recently created a POC that leveraged Vert.x’s web socket implementation, so I joined forces with a co-worker who focused on our Kinesis Java consumer. He spun up a Vert.x project and integrated the consumer code. I then integrated the consumers with Vert.x’s event bus, and created a simple HTML page that, via Javascript and web sockets, also integrated with the event bus. Between the two of us, we had within a couple of hours created an application that rendered an HTML page, listened to Kinesis, and pushed messages to be displayed in real-time in the browser window.

I’ll show you how we did it. Note that I’ve modified our implementation for the purposes of clarity in this article in the following ways:

  • This article uses RabbitMQ rather than Kinesis. The latter is proprietary to AWS, whereas RabbitMQ is widely used and easy to spin up and develop prototypes against. While Kinesis is considered an event log and RabbitMQ a message queue, for our purposes their functionality is the same.
  • I’ve removed superfluous code, combined some classes, and abandoned some best practices (e.g. using constants or properties instead of hard-coded strings) to make the code samples easier to follow.

Other than that and the renaming of some classes and packages, the crux of the work remains the same.

The Task at Hand

First, let’s take a look at the overall architecture:



Figure 1

At the center of the server architecture is RabbitMQ. On the one side of RabbitMQ, we have some random service (represented in the diagram by the grey box labelled Some random service) that publishes messages. For our purposes, we don’t care what this service does, other than the fact that it publishes text messages. On the other side, we have our Vert.x service that consumes messages from RabbitMQ. Meanwhile, a user’s Web browser loads an HTML page served up by our Vert.x app, and then opens a web socket to register itself to receive events from Vert.x.

We care mostly what happens within the purple box, which represents the Vert.x service. In the center, you’ll notice the event bus. The event bus is the “nervous system” of any Vert.x application, and allows separate components within an application to communicate with each other. These components communicate via addresses, which are really just names. As shown in the diagram, we’ll use two addresses: service.ui-message and service.rabbit.

The components that communicate via the event bus can be any type of class, and can be written in one of many different languages (indeed, Vert.x supports mixing Java, Javascript, Ruby, Groovy, and Ceylon in a single application). Generally, however, these components are written as what Vert.x calls verticles, or isolated units of business logic that can be deployed in a controlled manner. Our application employs two verticles: RabbitListenerVerticle and RabbitMessageConverterVerticle. The former registers to consume events from RabbitMQ, passing any message it receives to the event bus at the service.rabbit address. The latter receives messages from the event bus at the service.rabbit address, transforms the messages, and publishes them to the service.ui-message address. Thus, RabbitListenerVerticle‘s sole purpose is to consume messages, while RabbitMessageConverterVerticle‘s purpose is to transform those messages; they do their jobs while communicating with each other–and the rest of the application–via the event bus.
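To make that interaction concrete, here’s a hedged sketch of what the converter verticle might look like. The event bus addresses are the ones named above, but the transformation itself is a stand-in (the real logic depends on the message format):

```java
import io.vertx.core.AbstractVerticle;

// Hypothetical sketch of RabbitMessageConverterVerticle: consume from one
// event bus address, transform the message, and publish to another.
public class RabbitMessageConverterVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.eventBus().consumer("service.rabbit", message -> {
            // Stand-in transformation; real code would parse and reshape the payload.
            String transformed = String.valueOf(message.body()).trim();
            vertx.eventBus().publish("service.ui-message", transformed);
        });
    }
}
```

Note how the verticle knows nothing about RabbitMQ or the browser; it only knows the two addresses, which is what keeps the components decoupled.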

Once the transformed message is pushed to the service.ui-message event bus address, Vert.x’s web socket implementation pushes it up to any web browsers that have subscribed with the service. And really… that’s it!
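The browser-facing half of that flow is handled by Vert.x’s SockJS event bus bridge. As a hedged preview (the /eventbus/* route path is our own choice, and only the service.ui-message address is permitted outbound), the server-side wiring might look like this:

```java
// Hedged sketch: bridge the event bus to the browser over SockJS,
// allowing only outbound messages on the service.ui-message address.
SockJSHandler sockJSHandler = SockJSHandler.create(vertx);
BridgeOptions options = new BridgeOptions()
        .addOutboundPermitted(new PermittedOptions().setAddress("service.ui-message"));
sockJSHandler.bridge(options);
router.route("/eventbus/*").handler(sockJSHandler);
```

A page can then subscribe from Javascript using the vertx-eventbus client library, registering a handler on the service.ui-message address.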

Let’s look at some code.

Maven Dependencies

We use Maven, and so added these dependencies to the project’s POM file:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-core</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web-templ-handlebars</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-rabbitmq-client</artifactId>
  <version>3.3.3</version>
</dependency>

The first dependency, vertx-core, is required for all Vert.x applications. The next, vertx-web, provides functionality for handling HTTP requests. vertx-web-templ-handlebars enhances vertx-web with Handlebars template rendering. And vertx-rabbitmq-client provides us with our RabbitMQ consumer.

Setting Up the Web Server

Next, we need an entry point for our application.

package com.taubler.bridge;
import io.vertx.core.AbstractVerticle;

public class Main extends AbstractVerticle {

   @Override
   public void start() {
   }

}

When we run our application, we’ll tell Vert.x that this is the main class to launch (Vert.x requires the main class to be a verticle, so we simply extend AbstractVerticle). On startup, Vert.x will create an instance of this class and call its start() method.

Next we’ll want to create a web server that listens on a specified port. The Vert.x-web add-on uses two main classes for this: HttpServer is the actual web server implementation, while Router is a class that allows requests to be routed. We’ll create both in the start() method. Also, although we don’t strictly need one, we’ll use a templating engine. Vert.x-web offers a number of options; we’ll use Handlebars here.

Let’s see what we have so far:

   
TemplateHandler hbsHandler = TemplateHandler.create(HandlebarsTemplateEngine.create());

   @Override
   public void start() {

       HttpServer server = vertx.createHttpServer();
       Router router = new RouterImpl(vertx);
       router.get("/rsc/*").handler(ctx -> {
           String fn = ctx.request().path().substring(1);
           vertx.fileSystem().exists(fn, b -> {
               if (b.result()) {
                   ctx.response().sendFile(fn);
               } else {
                    System.out.println("Couldn't find " + fn);
                   ctx.fail(404);
               }
           });
       });

       
        String hbsPath = ".+\\.hbs";
       router.getWithRegex(hbsPath).handler(hbsHandler);

       router.get("/").handler(ctx -> {
            ctx.reroute("/index.hbs");
       });

       // web socket code will go here

       server.requestHandler(router::accept).listen(8080);

   }

Let’s start with, well, the start() method. Creating the server is a simple one-liner: vertx.createHttpServer(). Here, vertx is an instance of io.vertx.core.Vertx, a class that is at the core of much of Vert.x’s functionality. Since our Main class extends AbstractVerticle, it inherits the protected member Vertx vertx.

Next, we’ll configure the server. Most of this work is done via a Router, which maps request paths to Handlers that process them and return the correct response. We first create an instance of RouterImpl, passing our vertx instance. This class provides a number of methods to route requests to their associated Handlers, which process the request and provide a response.

Since we’ll be serving up a number of static Javascript and CSS resources, we’ll create that handler first. The router.get("/rsc/*") call matches GET requests whose path starts with /rsc/. As part of Router’s fluent interface, it returns our router instance, allowing us to chain the handler() method to it. We pass that method an instance of io.vertx.core.Handler<io.vertx.ext.web.RoutingContext> in the form of a lambda expression. The lambda looks in the filesystem for the requested resource and, if found, returns it; otherwise it returns a 404 status code.

Next we’ll create our second handler, to serve up dynamic HTML pages. To do this, we configure the router to route any paths that match the hbsPath regular expression, which essentially matches any path ending in .hbs, to the io.vertx.ext.web.handler.TemplateHandler instance that we created as a class variable. Like our lambda above, this is an instance of Handler<RoutingContext>, but one written specifically to leverage a templating engine; in our case, a HandlebarsTemplateEngine. Although not strictly needed for this application, this allows us to generate dynamic HTML based on server-side data.
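As a quick sanity check on that pattern: in Java source, the regular expression is written ".+\\.hbs" (one or more characters, then a literal ".hbs"). A small standalone sketch, using a hypothetical class name for illustration:

```java
public class HbsPathDemo {

    // String.matches() tests the pattern against the entire path.
    static boolean matchesHbs(String path) {
        return path.matches(".+\\.hbs");
    }

    public static void main(String[] args) {
        System.out.println(matchesHbs("/index.hbs"));      // true
        System.out.println(matchesHbs("/rsc/js/main.js")); // false
    }
}
```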

Last, we configure our router to internally redirect requests for / to /index.hbs, ensuring that our TemplateHandler handles them.

Now that our router is configured, we simply pass a reference to its accept() method to our server’s requestHandler() method, then (using HttpServer’s fluent API) invoke server’s listen() method, telling it to listen on port 8080. We now have a Web server listening on port 8080.

Adding Web Sockets

Next, let’s enable web sockets in our app. You’ll notice a comment in the code above; we’ll replace it with the following:

       SockJSHandlerOptions options = new SockJSHandlerOptions().setHeartbeatInterval(2000);
       SockJSHandler sockJSHandler = SockJSHandler.create(vertx, options);
       BridgeOptions bo = new BridgeOptions()
           .addInboundPermitted(new PermittedOptions().setAddress("/client.register"))
           .addOutboundPermitted(new PermittedOptions().setAddress("service.ui-message"));
       sockJSHandler.bridge(bo, event -> {
           System.out.println("A websocket event occurred: " + event.type() + "; " + event.getRawMessage());
           event.complete(true);
       });
       router.route("/client.register" + "/*").handler(sockJSHandler);

Our web client will use the SockJS Javascript library. This makes integrating with Vert.x dirt simple, since Vert.x-web offers a SockJSHandler that does most of the work for you. The first couple of lines above create one. We first create a SockJSHandlerOptions instance to set our preferences. In our case, we tell our implementation to expect a heartbeat from our web client every 2000 milliseconds; otherwise, Vert.x will close the web socket, assuming that the user has closed or navigated away from the web page. We pass our vertx instance along with our options to create a SockJSHandler, called sockJSHandler. Just like our lambda and hbsHandler above, SockJSHandler implements Handler<RoutingContext>.

Next, we configure our bridge options. Here, the term “bridge” refers to the connection between our Vert.x server and the web browser. Messages are sent between the two on addresses, much like messages are passed on addresses along the event bus. Our BridgeOptions instance, then, configures which addresses the browser can send messages to the server on (via the addInboundPermitted() method) and which the server can send to the browser on (via the addOutboundPermitted() method). In our case, we’re saying that the web browser can send messages to the server via the “/client.register” address, while the server can send messages to the browser on the “service.ui-message” address. We configure sockJSHandler’s bridge options by passing it our BridgeOptions instance, as well as a lambda representing a Handler<Event> that provides useful logging for us. Finally, we attach sockJSHandler to our router, listening for HTTP requests that match “/client.register/*”.

Consuming From the Message Queue

That covers the web server part. Let’s shift to our RabbitMQ consumer. First, we’ll write the code that creates our RabbitMQClient instance. This will be done in a RabbitClientFactory class:

public class RabbitClientFactory {

   public static RabbitClientFactory RABBIT_CLIENT_FACTORY_INSTANCE = new RabbitClientFactory();

   private static RabbitMQClient rabbitClient;
   private RabbitClientFactory() {}

   public RabbitMQClient getRabbitClient(Vertx vertx) {
       if (rabbitClient == null) {
           JsonObject config = new JsonObject();
            config.put("uri", "amqp://dbname:password@cat.rmq.cloudamqp.com/dbname");
            rabbitClient = RabbitMQClient.create(vertx, config);
       }
       return rabbitClient;
   }

}

This code should be pretty self-explanatory. The one method, getRabbitClient(), creates an instance of a RabbitMQClient configured with the AMQP URI, assigns it to a static rabbitClient variable, and returns it. As per the typical factory pattern, a singleton factory instance is also created.

Next, we’ll get an instance of the client and subscribe it to listen for messages. This will be done in a separate verticle:

public class RabbitListenerVerticle extends AbstractVerticle {

   private static final Logger log = LoggerFactory.getLogger(RabbitListenerVerticle.class);

   @Override
   public void start(Future<Void> fut) throws InterruptedException {

       try {
           RabbitMQClient rClient = RABBIT_CLIENT_FACTORY_INSTANCE.getRabbitClient(vertx);
           rClient.start(sRes -> {
               if (sRes.succeeded()) {
                   rClient.basicConsume("bunny.queue", "service.rabbit", bcRes -> {
                       if (bcRes.succeeded()) {
                           System.out.println("Message received: " + bcRes.result());
                       } else {
                           System.out.println("Message receipt failed: " + bcRes.cause());
                       }
                   });
               } else {
                   System.out.println("Connection failed: " + sRes.cause());
               }
           });

           log.info("RabbitListenerVerticle started");
           fut.complete();

       } catch (Throwable t) {
           log.error("failed to start RabbitListenerVerticle: " + t.getMessage(), t);
           fut.fail(t);
       }
   }
}

As with our Main verticle, we implement the start() method (accepting a Future with which we can report our success or failure with starting this verticle). We use the factory to create an instance of a RabbitMQClient, and start it using its start() method. This method takes a Handler<AsyncResult<Void>> instance which we provide as a lambda. If starting the client succeeds, we call its basicConsume() method to register it as a listener. We pass three arguments to that method. The first, “bunny.queue”, is the name of the RabbitMQ queue that we’ll be consuming from. The second, “service.rabbit”, is the name of the Vert.x event bus address to which any messages received by the client (which will be of type JsonObject) will be sent (see Figure 1 for a refresher on this concept). The last argument is again a Handler<AsyncResult<Void>>; this argument is required, but we don’t really need it, so we simply log success or failure messages.

So at this point, we have a listener that receives messages from RabbitMQ and immediately sticks them on the event bus. What happens to the messages then?

Theoretically, we could allow those messages to be sent straight to the web browser to handle. However, I’ve found that it’s best to allow the server to format any data in a manner that’s most easily consumed by the browser. In our sample app, we really only care about printing the text contained within the RabbitMQ message. However, the messages themselves are complex objects. So we need a way to extract the text itself before sending it to the browser.

So, we’ll simply create another verticle to handle this:

public class RabbitMessageConverterVerticle extends AbstractVerticle {

   @Override
   public void start(Future<Void> fut) throws InterruptedException {
       vertx.eventBus().consumer("service.rabbit", msg -> {
           JsonObject m = (JsonObject) msg.body();
           if (m.containsKey("body")) {
               vertx.eventBus().publish("service.ui-message", m.getString("body"));
           }
       });
   }
}

There’s not much to it. Again we extend AbstractVerticle and override its start() method. There, we gain access to the event bus by calling vertx.eventBus(), and listen for messages by calling its consumer() method. The first argument indicates the address we’re listening to; in this case, it’s “service.rabbit”, the same address that our RabbitListenerVerticle publishes to. The second argument is a Handler<Message<Object>>. We provide that as a lambda that receives a Message<Object> instance from the event bus. Since we’re listening to the address that our RabbitListenerVerticle publishes to, we know that the Object contained as the Message’s body will be of type JsonObject. So we cast it as such, find its “body” key (not to be confused with the body of the Message<Object> we just received from the event bus), and publish that to the “service.ui-message” event bus channel.

Deploying the Message Queue Verticles

So we have two new verticles designed to get messages from RabbitMQ to the “service.ui-message” address. Now we need to ensure they are started. So we simply add the following code to our Main class:

   protected void deployVerticles() {
       deployVerticle(RabbitListenerVerticle.class.getName());
       deployVerticle(RabbitMessageConverterVerticle.class.getName());
   }

   protected void deployVerticle(String className) {
       vertx.deployVerticle(className, res -> {
           if (res.succeeded()) {
                System.out.printf("Deployed %s verticle%n", className);
           } else {
                System.out.printf("Error deploying %s verticle: %s%n", className, res.cause());
           }
       });
   }

Deploying verticles is done by calling deployVerticle() on our Vertx instance. We provide the name of the class, as well as a Handler<AsyncResult<String>>. We create a deployVerticle() method to encapsulate this, and call it to deploy each of RabbitListenerVerticle and RabbitMessageConverterVerticle from within a deployVerticles() method.

Then we add deployVerticles() to Main’s start() method.

HTML and Javascript

Our server-side implementation is done. Now we just need to create our web client. First, we create a basic HTML page, index.hbs, and add it to a templates/ folder within our web root:

<html>
<head>
 <title>Messages</title>
 <link rel="stylesheet" href="/rsc/css/style.css">
  <script src="https://code.jquery.com/jquery-3.1.0.min.js" integrity="sha256-cCueBR6CsyA4/9szpPfrX3s49M9vUU5BgtiJj06wt/s=" crossorigin="anonymous"></script>
  
  <script src="http://cdn.jsdelivr.net/sockjs/0.3.4/sockjs.min.js"></script>
  
  <script src="/rsc/js/vertx-eventbus.js"></script>
  
  <script type="text/javascript" src="/rsc/js/main.js"></script>
</head>

<body>
 <h1>Messages</h1>
 <div id="messages"></div>
</body>
</html>

We’ll leverage the jQuery and SockJS Javascript libraries, so those scripts are imported. We’ll also import two scripts from our rsc/js/ folder: main.js, which we’ve created, and vertx-eventbus.js, which we’ve downloaded from the Vert.x site. The other important element is a DIV with id messages. This is where the RabbitMQ messages will be displayed.

Let’s look at our main.js file:

var eventBus = null;

var eventBusOpen = false;

function initWs() {
   eventBus = new EventBus('/client.register');
   eventBus.onopen = function () {
     eventBusOpen = true;
     regForMessages();
   };
   eventBus.onerror = function(err) {
     eventBusOpen = false;
   };
}

function regForMessages() {
   if (eventBusOpen) {
      eventBus.registerHandler('service.ui-message', function (error, message) {
           if (message) {
             console.info('Found message: ' + message);
             var msgList = $("div#messages");
             msgList.html(msgList.html() + "<div>" + message.body + "</div>");
            } else if (error) {
                console.error(error);
            }
       });
   } else {
       console.error("Cannot register for messages; event bus is not open");
   }
}

$( document ).ready(function() {
  initWs();
});

function unregister() {
  // Close the event bus connection when we no longer want messages
  if (eventBus) {
    eventBus.close();
    eventBusOpen = false;
  }
}

initWs() will be called when the document loads, thanks to jQuery’s document.ready() function. It will open a web socket connection on the /client.register channel (permission for which, as you recall, was explicitly granted by our BridgeOptions instance).

Once it successfully opens, regForMessages() is invoked. This function invokes the Javascript representation of the Vert.x event bus, registering to listen on the “service.ui-message” address. Vert.x’s SockJS implementation provides the glue between the web socket address and its event bus address. regForMessages() also takes a callback function that accepts a message, or an error if something went wrong. As with Vert.x event bus messages, each message received will contain a body, which in our case consists of the text to display. Our callback simply extracts the body and appends it to the messages DIV in our document.

Running the Whole Application

That’s it! Now we just need to run our app. First, of course, we need a RabbitMQ instance. You can either download a copy (https://www.rabbitmq.com/download.html) and run it locally, or use a cloud provider such as CloudAMQP on Heroku (https://elements.heroku.com/addons/cloudamqp). Either way, be sure to create a queue called bunny.queue.

Finally, we’ll launch our Vert.x application. Since Vert.x does not require an app server, it’s easy to start up as a regular Java application. The easiest way is to just run it straight within your IDE. Since I use Eclipse, I’ll provide those steps:

  1. Go to the Run -> Run Configurations menu item
  2. Click the New Run Configurations button near the top left of the dialog
  3. In the right-hand panel, give the run configuration a name such as MSGQ UI Bridge
  4. In the Main tab, ensure that the project is set to the one in which you’ve been working
  5. Also in the Main tab, enter io.vertx.core.Launcher as the Main class.
  6. Switch to the Arguments tab and add the following as the Program arguments: run com.taubler.bridge.Main --launcher-class=io.vertx.core.Launcher
  7. If you’re using a cloud provider such as Heroku, you might need to provide a system property representing your Rabbit MQ credentials. Do this in the Environment tab.

Once that’s set up, launch the run configuration. The server is listening on port 8080, so pointing a browser to http://localhost:8080 will load the index page:

blog-message-1.png

Next, go to the RabbitMQ management console’s Queues tab. Under the Publish message header, you’ll find controls allowing you to push a sample message to the queue:

blog-rabbit-1.png

Once you’ve done that, head back to the browser window. Your message will be there waiting for you!

blog-message-2.png

That’s it!

I hope this post has both taught you how to work with message queues and web sockets using Vert.x, and demonstrated how easy and fun working with Vert.x can be.

Collection functions

A common use of functional-style programming is applying transformative functions to collections. For example, I might have a List of items, and I want to transform each item in a certain way. There are a number of such functions that are commonly found, in one form or another, in languages that allow functional programming. For example, Seq in Scala provides a map() function, while a method of the same name can be found in Java’s Stream class.

Keeping these functions/methods straight can be difficult when starting out, so I thought I’d list out some of the common ones, along with a quick description of their purpose. I’ll use Scala’s implementation primarily, but will try to point out where Java differs. Hopefully the result will be useful for users of other languages as well.

flatten()

Purpose: Takes a list of lists/sequences, and puts each element in each list/sequence into a single list
Result: A single list consisting of all elements
Example:
scala> val letters = List( List("a","e","i","o","u"), List("b","d","f","h","k","l","t"), List("c","m","n","r","s","v","w","x","z"), List("g","j","p","q","y") )
letters: List[List[String]] = List(List(a, e, i, o, u), List(b, d, f, h, k, l, t), List(c, m, n, r, s, v, w, x, z), List(g, j, p, q, y))
scala> letters.flatten
res19: List[String] = List(a, e, i, o, u, b, d, f, h, k, l, t, c, m, n, r, s, v, w, x, z, g, j, p, q, y)
scala> letters.flatten.sorted
res20: List[String] = List(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z)
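For comparison, Java’s Stream API has no flatten() method of its own; the usual idiom is flatMap(List::stream). A minimal sketch (the class and method names here are just for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlattenDemo {

    // Java has no flatten(); flatMap(List::stream) does the same job,
    // streaming each inner list's elements into one flat stream.
    static List<String> flatten(List<List<String>> lists) {
        return lists.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> letters = List.of(
                List.of("a", "e", "i", "o", "u"),
                List.of("b", "d", "f", "h", "k", "l", "t"));
        System.out.println(flatten(letters)); // [a, e, i, o, u, b, d, f, h, k, l, t]
    }
}
```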

map()

Purpose: Applies a function that transforms each element in a list.
Result: Returns a list (or stream) consisting of the transformed elements. The resulting elements can be of a different type than the original elements.
Example:
scala> val nums = List(1, 2, 3, 4, 5)
scala> val newNums = nums.map(n => n * 2)
newNums: List[Int] = List(2, 4, 6, 8, 10)
scala> val newStrs = nums.map(n => s"number $n")
newStrs: List[String] = List(number 1, number 2, number 3, number 4, number 5)
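The Java Stream equivalent is Stream.map(). A minimal sketch mirroring the second Scala example (class and method names are just for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class MapDemo {

    // Stream.map() transforms each element; note the result type
    // (String) differs from the input type (Integer).
    static List<String> label(List<Integer> nums) {
        return nums.stream()
                .map(n -> "number " + n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(label(List.of(1, 2, 3))); // [number 1, number 2, number 3]
    }
}
```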

Scala note: Often when working with Futures, you’ll see map() applied to a Future. In that case, the map() method, when given a function that transforms the value returned by the Future, returns a new Future that completes with the transformed value once the first Future completes.

flatMap()

Purpose: Applies a function that itself produces a sequence/list to each element of a list, then places every element of each resulting sequence into a single list. A combination of map and flatten.
Result: Returns a single list (or stream) containing all of the results of applying the provided function to each element.
Example:
scala> val numGroups = List( List(1,2,3), List(11,22,33) )
numGroups: List[List[Int]] = List(List(1, 2, 3), List(11, 22, 33))
scala> numGroups.flatMap( n => n.filter(_ % 2 == 0) )
res8: List[Int] = List(2, 22)
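The Java equivalent is Stream.flatMap(). A minimal sketch mirroring the Scala example (names here are illustrative only):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {

    // flatMap applies a stream-producing function to each element
    // (here, each inner list), then flattens the results into one stream.
    static List<Integer> evensFromGroups(List<List<Integer>> groups) {
        return groups.stream()
                .flatMap(g -> g.stream().filter(n -> n % 2 == 0))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<Integer>> numGroups = List.of(List.of(1, 2, 3), List.of(11, 22, 33));
        System.out.println(evensFromGroups(numGroups)); // [2, 22]
    }
}
```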

fold()

Purpose: Starts with a single seed value of type T (call it v), then takes a List of T and combines each T element with v, producing a new value of v on each iteration. The order of iteration is non-deterministic. Note: this is very similar to the reduce() method in Java streams.
Result: A single T value (which would be the final value of v as described above)
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.fold(0) ((a,b) => if (a % 2 == 0) a else b)
res25: Int = 0
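The Java analog, as noted, is Stream.reduce(). One caveat: because elements may be combined in any order (especially in parallel streams), reduce() expects an associative combining function; the function in the Scala example above isn’t associative, which is part of why fold()’s result can be surprising. A minimal sketch with a well-behaved combiner (class name is illustrative):

```java
import java.util.List;

public class ReduceDemo {

    // Stream.reduce() is Java's closest analog to Scala's fold():
    // a seed value plus a combining function yields a single result.
    static int sum(List<Integer> nums) {
        return nums.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3, 4, 5))); // prints 15
    }
}
```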

foldLeft()

Purpose: Like fold(), always iterating from the left to the right.
Result: A single T value
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.foldLeft(-1) ((a,b) => if (a % 2 == 0) a else b)
res27: Int = 2

foldRight()

Purpose: Like fold(), always iterating from the right to the left.
Result: A single T value
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.foldRight(-1) ((a,b) => if (b % 2 == 0) b else a)
res28: Int = 8
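Java’s Stream API has no directional fold variants; when iteration order matters, a plain loop does the job. A sketch of foldLeft written as a loop, mirroring the Scala foldLeft example above (names are illustrative):

```java
import java.util.List;
import java.util.function.BinaryOperator;

public class FoldLeftDemo {

    // A left fold: combine the accumulator with each element,
    // strictly left to right.
    static int foldLeft(List<Integer> xs, int seed, BinaryOperator<Integer> f) {
        int acc = seed;
        for (int x : xs) {
            acc = f.apply(acc, x);
        }
        return acc;
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9);
        // Mirrors the Scala example: keep the accumulator once it becomes even.
        int result = foldLeft(nums, -1, (a, b) -> (a % 2 == 0) ? a : b);
        System.out.println(result); // prints 2
    }
}
```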

Business Logic for Scala/Play apps

As I’d mentioned previously, I’m a fairly seasoned Java developer who is making a foray into Scala and its ecosystem (including the Play framework, as well as Akka).

One thing that’s struck me about Play is that there doesn’t seem to be a prescribed pattern for how to handle business logic. This is in contrast to a typical Java EE application (or Spring-based Java web application). With the latter, you’d typically find applications divided into the following:

  • MVC Layer (or Presentation Layer) – This layer would typically contain:
    • UI Models: POJOs designed to transfer values from the controllers to the UI templates
    • Views: files containing markup mixed with UI models, to present data to the user. These would typically be JSPs, or files in some other templating language such as Thymeleaf, Handlebars, Velocity, etc. They might also be JSON or XML responses in the case of single-page applications
    • Controllers: Classes designed to map requests to business logic classes, and transforming the results into UI Models.
  • Business Layer – This layer would typically contain:
    • Managers and/or Facades: somewhat coarse-grained classes that encapsulate business logic for a given domain; for example, UserFacade, AccountFacade, etc. These classes are typically stateless singletons.
    • Potentially Business Models: POJOs that describe entities from the business’ standpoint.
  • Data Access Layer – This layer would typically contain:
    • DAOs (Data Access Objects): Fine-grained, singleton classes that encapsulate the logic needed to for CRUD operations on database entities
    • Database Entities: POJOs that map to database tables (or Mongo collections, etc, depending on the data source).
By contrast, Play applications seem to typically have only the MVC Layer. Occasionally I’ll see examples with a utils/ folder, but I’ve yet to see any examples with what I would consider an entirely separate business layer or data access layer.

Clearly I could create Business and Data Access Layers for my Play application if I wanted to. But part of learning a new framework is not just learning the mechanics, but also the spirit of the framework. So here are a few thoughts I’ve had on the subject:

Rely on Models’ Companion Objects

Scala has the concept of companion objects. A companion object is essentially a singleton instance of a class. For example, I might have a model called User, which looks something like this:
case class User(id: Long, firstName: String, lastName: String, email: String)
I would typically create, in the same User.scala file, a companion object like so:
object User {
  def findByEmail(email: String): User = {
    // query the database and return a User
  }
}

As shown above, it’s common for companion objects to contain CRUD operations. So one thought is that we can combine business logic and data access methods in a model’s companion object, treating the companion object as a sort of hybrid manager/DAO.

There are of course a few downsides to this approach. First is that of separation of concerns. If we’re imbuing companion objects with the ability to perform CRUD operations and business logic, then we’ll wind up with large, hard to read companion objects that have multiple responsibilities.

The other downside is that business operations within a companion object would be too fine-grained. Often, business logic spans multiple entities. Trying to choose which entity’s companion object should contain a specific business rule can become cumbersome. For example, say I want to update a phone number. Surely, that functionality belongs in a PhoneNumber object. But… what if my business stipulates that a phone number can only contain an area code that corresponds to its owner’s mailing address? Suddenly, my PhoneNumber object must communicate with a User and MailingAddress object.

Use Middleware for Business Logic

As a Java engineer, I’m used to my business logic being encapsulated within stateless Spring beans. These beans live in the Spring container (i.e. the application context). They are injected into, for example, controllers, which in turn invoke methods on the bean to cause some business operation to occur.

Play ships with Akka out of the box. So I wonder: would a framework like Akka suffice? Presumably I could create an actor hierarchy for each business operation, thus keeping the operations centralized and well-defined. I’m just delving into Akka, so I’m not sure how viable a solution that would be. My sense is that, at best, I’d be misusing Akka somewhat with this approach. Moreover, I suspect I’d be trying to shoehorn a Spring-application paradigm into a Play application.

Let Aggregate Roots Define Business Logic

I’ve coincidentally been reading a lot of Martin Fowler‘s blog posts. One idea of his that seems to be picking up traction is that anemic entities–those that are little more than getters and setters–are an anti-pattern. Couple this with the concept of aggregates and aggregate roots presented in Eric Evans’ Domain-Driven Design, and I think I might be onto the best solution.

The basic premise is that, unlike the layered architecture I described above, with Manager/Facade classes, the domain entities themselves should perform their own business logic. Furthermore, entities should be arranged into aggregates. An aggregate is essentially a group of domain entities that logically form one unit. Furthermore, one of these entities should be the root, to which all other entities are attached. Modifications should only be done through that root.

In my example above about Users, PhoneNumbers and MailingAddresses, those entities would be arranged as an aggregate. User would be the root entity; after all, PhoneNumbers and MailingAddresses don’t simply exist on their own, but rather are associated with a person (or organization). To modify a User’s phone number, I would go through the User rather than directly modifying the PhoneNumber. For example, something like this:

var user: User = Users.findById(123)
user.phoneNumber = "415-555-1212"

rather than this:

var pn: PhoneNumber = PhoneNumbers.findById(789)
pn.value = "415-555-1212"

Using the former approach, my User instance can ensure data integrity; e.g. ensuring that the provided area code is valid.

This, then, may be the best option:

  1. Companion objects handle CRUD data access operations
  2. Entities themselves–organized into aggregates–handle their own business rules
  3. No separate business logic stereotype is called out

Scaling Scala – part 1

It’s time to explore Scala. I still enjoy Java programming, and that’s still what I do for a living. But I have to admit that Scala is intriguing. Plus, it’s good to learn new languages every so often. And as they say, Java is more than a language; it’s also a platform and an ecosystem, one that Scala fits very well into.

I’ve gone through different books and tutorials, but the best way to learn a language is to come up with a task and figure out how to do it. I’ve decided that while I’m learning Scala, I’ll also learn the Play! framework. My task will be a microservice whose purpose is to authenticate users. Although my current employer doesn’t use Scala (at least not directly, although we do use Kafka, Akka, and other technologies written in Scala) my plan is to write a service that could be used by the company. That way I won’t be tempted to cut corners.

As I develop this project, I’ll post the solutions to any issues I encounter along the way. I figure there are probably enough Scala noobs out there who might encounter the same problems. I also figure that some of these issues might be very elementary to developers who are more experienced with Play! and Scala. In that vein, any corrections or suggestions are most welcome!

Maven Repositories (and the Oracle driver)

I started a few days ago, and hit two issues rather quickly. The first stems from my company’s use of Oracle as its RDBMS; that’s where we store user credentials. So of course I would need to read from Oracle in order to authenticate users.

As I understand it, Play! makes use of SBT (the Simple Build Tool), which is developed by Typesafe, and is used to manage Play! projects (and other Scala-based frameworks). SBT is analogous to Maven for pure Java projects. In fact, SBT makes use of Ivy for dependency management; Ivy, in turn, makes use of Maven repositories.

So I figured I’d need to pull the Oracle JDBC driver from Maven Central. Play! projects are created with a build.sbt file in the project’s root directory, and that’s where dependencies are listed. We use ojdbc6 (the Oracle JDBC driver for Java 6), so our POM entry looks like this:

<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
</dependency>

To add that to build.sbt, it would look like this (note the single %, since ojdbc6 is a plain Java artifact rather than one published per Scala version):

libraryDependencies += "com.oracle" % "ojdbc6" % "11.2.0.3"

I added that to build.sbt, and was confronted with errors stating that the dependency couldn’t be resolved. It turns out that, for licensing reasons, Oracle does not allow its JDBC driver to be hosted in public repos. So the solution is to simply download the JAR file from Oracle, create a lib/ directory in the Play! project’s root, and add the JAR there.

Admittedly, that was as much an issue with the ojdbc6 driver as with Play! itself, but I thought it worth documenting here.

Artifactory (or Nexus, if you prefer)

Next up was the issue of artifacts that my company produces. Much of our code is encapsulated in common JAR files and, of course, hosted in our own internal (Artifactory) repository. Specifically, the domain class that I would be pulling from the database contains, among other fields, an enum called Type (yes, I know… that name was not my doing!) which is located in a commons module. I could’ve created a Scala Enumeration for my project, or just skipped the field, but I wanted to demonstrate the interoperability between new Scala projects and our existing Java code.

So I’d have to point SBT to the correct location to find the commons module. I found bits and pieces online on how to do it; here’s the solution that I ultimately pieced together:

(1) SBT had already created a ~/.sbt/0.13/ directory for me. In there, I created a plugins/ subdirectory, and within it a credentials.sbt file with these contents:

credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")

(2) Within the existing ~/.ivy2/ directory, I created a .credentials file with these contents:

realm=[Artifactory Realm]
host=company.domain.com
user=[username]
password=[password]

(3) I added the repository location in ~/.sbt/repositories like so:

my-maven-proxy-releases: https://artifactory.tlcinternal.com/artifactory/LC_CENTRAL

(4) I added the following line in ~/.sbtconfig to tell SBT where to find the credentials:

export SBT_CREDENTIALS="$HOME/.ivy2/.credentials"

I’m not sure why we need both step 1 and 4, but both seem to be required.

Once I restarted my Play! application (this was one case where hot-deployment didn’t seem to work) I was able to use the commons module in my Play! app.