CanWeAllAgreeThatWeWillAimForClarityAsWeNameClasses?

I like to be productive and efficient. So anything that unnecessarily wastes even just a few moments of my time… well, it really bugs me.

That is why I am particularly bugged by a particular practice (nay, an anti-practice) that I see too often. FYI, AFAIK most folks would agree with me, so IDK why this anti-practice is so pervasive.

I’m talking about CamelCased class names with capitalized acronyms.

A camel. See if you can tell how this animal inspired the term “CamelCase”.

You see these fairly often in Java. EOFException. URLEncoder. ISBNValidator. Mercifully, the good folks who designed the java.net package decided to use Http rather than HTTP in class names, so we’re not cursed with the likes of HTTPURLConnection. Not so with Apple, however, as anyone who’s worked with Objective-C, or even Swift, can attest. The two-to-three-uppercase-letter classname prefix standard (for example, NSThis, NSThat, NSTheOther) is bad enough. But when you couple that with Apple’s overzealous penchant for capitalizing acronyms, you wind up reading AFJSONRequestOperations from your NSHTTPURLResponses.

The weird-ass camel that some language designers apparently once saw.

SMH.

Though they’re rare, I’ve encountered programmers who vociferously favor the all-caps approach. The rationale is usually that it is, simply, proper grammar. And that’s fair enough. But guess what? It’s also proper grammar to put spaces between words, and to lowercase the first letter of non-proper nouns (unless your nouns are German). Yet we programmers happily break those grammatical decrees on a daily basis.

Instead of being GrammarSticklers, we should be crafting class names to convey the classes’ meanings as clearly and quickly as possible. It’s not that these class names are illegible. It’s simply that they are less legible than they should be. It takes an extra beat or two to understand them. And I don’t know about you, but I don’t like wasting beats.

Am I overreacting? IDK, maybe I am. But thanks for bearing with me anyway. I won’t waste any more of your time.

Collections and Encapsulation in Java
Never, ever return null when you're supposed to be returning a Collection.

A core tenet of object oriented programming is encapsulation: callers should not have access to the inner workings of a class. This is something that newer languages such as Kotlin, Swift, and Ceylon have solved well with first-class properties.

Java—having been around for quite a while—does not have the concept of first-class properties. Instead, the JavaBeans spec was introduced as Java’s method of enforcing encapsulation. Writing JavaBeans means that you need to make your class’s fields private, exposing them only via getter and setter methods.

If you’re like me, you’ve often felt when writing JavaBeans that you were writing a bunch of theoretical boilerplate that rarely served any practical purpose. Most of my JavaBeans have consisted of private fields, and their corresponding getters and setters that do nothing more than, well, get and set those private fields. More than once I’ve been tempted to simply make the fields public and dispense with the getter/setter fanfare, at least until a stern warning from the IDE sent me back, tail between my legs, to the JavaBeans standard.

Recently, though, I’ve realized that encapsulation and the JavaBean getter/setter pattern are quite useful in a common scenario: Collection-type fields. How so? Let’s fabricate a simple class:

public class MyClass {

    private List<String> myStrings;

}

We have a field—a List of Strings—called myStrings, which is encapsulated in MyClass. Now, we need to provide accessor methods:

public class MyClass {

    private List<String> myStrings;

    public void setMyStrings(List<String> s) {
        this.myStrings = s;
    }

    public List<String> getMyStrings() {
        return this.myStrings;
    }

}

Here we have a properly encapsulated—if verbose—class. So we’ve done good, right? Hold that thought.

Optional lessons

Consider the Optional class, introduced in Java 8. If you’ve done much work with Optionals, you’ve probably heard the mantra that you should never return null from a method that returns an Optional. Why? Consider the following contrived example:

public class Foo {

    private String bar;

    public Optional<String> getBar() {
        return (bar == null) ? null : Optional.of(bar);
    }

}

Now clients can use the method thusly:

foo.getBar().ifPresent(log::info);

and risk throwing a NullPointerException. Alternatively, they could perform a null check:

if (foo.getBar() != null) {
    foo.getBar().ifPresent(log::info);
}

Of course, doing that defeats the very purpose of Optionals. In fact, it so defeats the purpose of Optionals that it’s become standard practice that any API that returns Optional will never return a null value.
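The fix is for the API itself to absorb the null; here is a minimal sketch of the same contrived class, using Optional.ofNullable():

public class Foo {

    private String bar;

    public Optional<String> getBar() {
        // ofNullable() yields an empty Optional when bar is null,
        // so callers can chain ifPresent(), map(), etc. without a null check
        return Optional.ofNullable(bar);
    }

}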

Back to Collections. Much like an Optional contains either zero or one value, a Collection contains zero or more. And much like with Optionals, there should be no reason to return a null Collection (except maybe in rare, specialized cases, none of which come to mind). Simply return an empty (zero-sized) Collection to indicate the lack of any elements.

The drawer still exists even when all the socks are removed, right?

It’s for this reason that it’s becoming more common to ensure that methods that return Collection types (including arrays) never return null values, the same as methods that return Optional types. Perhaps you or your organization have already adopted this rule in writing new code. If not, you should. After all, would you (or your clients) rather do this?:

boolean isUnique = personDao.getPersonsByName(name).size() == 1;

Or have your code littered with the likes of this?

List<Person> persons = personDao.getPersonsByName(name);
boolean isUnique = (persons == null) ? false : persons.size() == 1;
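On the DAO side, honoring that rule is usually a one-line change: substitute an empty Collection for null before returning. A rough sketch (the queryForPersons() call is hypothetical, standing in for whatever lookup the DAO actually performs, and the usual java.util imports are assumed):

public List<Person> getPersonsByName(String name) {
    List<Person> results = queryForPersons(name); // hypothetical lookup
    return (results == null) ? Collections.emptyList() : results;
}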

So how does this relate to encapsulation?

Keeping Control of our Collections

Back to our MyClass class. As it is, an instance of MyClass could easily return null from the getMyStrings() method; in fact, a fresh instance would do just that. So, to adhere to our new never-return-a-null-Collection guideline, we need to fix that:

public class MyClass {

    private List<String> myStrings = new ArrayList<>();

    public void setMyStrings(List<String> s) {
        this.myStrings = s;
    }

    public List<String> getMyStrings() {
        return this.myStrings;
    }

}

Problem solved? Not exactly. Any client could call aMyClass.setMyStrings(null), in which case we’re back to square one.

At this point, encapsulation sounds like a practical—rather than solely theoretical—concept. Let’s expand the setMyStrings() method:

public void setMyStrings(List<String> s) {
    if (s == null) {
        this.myStrings.clear();
    } else {
        this.myStrings = s;
    }
}

Now, even when null is passed to the setter, myStrings will retain a valid reference (in the example here, we take null to mean that the elements should be cleared out). And of course, a caller that obtains the list via aMyClass.getMyStrings() and then nulls out its own reference will have no effect on aMyClass’ underlying myStrings variable. So are we all done?

Er, well, sort of. We could stop here. But really, there’s more we should do.

Consider that we are replacing our private ArrayList with the List passed to us by the caller. This has two problems: first, we no longer know the exact List implementation used by myStrings. In theory, this shouldn’t be a problem, right? Well, consider this:

myClass.setMyStrings(Collections.unmodifiableList(Arrays.asList("Heh, gotcha!")));

So if we ever update MyClass such that it attempts to modify the contents of myStrings, bad things can start happening at runtime.

The second problem is that the caller retains a reference to our underlying List. So now that caller can directly manipulate our List.
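To make that aliasing problem concrete, here’s a contrived sketch of a caller that keeps the reference it passed in (assume myClass is a MyClass instance):

List<String> callerList = new ArrayList<>(Arrays.asList("a", "b"));
myClass.setMyStrings(callerList);

// The caller and MyClass now share the same List instance, so this
// mutation silently changes MyClass's internal state:
callerList.add("c");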

What we should be doing is storing the elements passed to us in the ArrayList to which myStrings was initialized. While we’re at it, let’s really embrace encapsulation. We should be hiding the internals of our class from outside callers. The reality is that callers of our classes shouldn’t care whether there’s an underlying List, or Set, or array, or some runtime dynamic code-generation voodoo, that’s storing the Strings that we pass to it. All they should know is that Strings are being stored somehow. So let’s update the setMyStrings() method thusly:

public void setMyStrings(Collection<String> s) {
    this.myStrings.clear(); 
    if (s != null) { 
        this.myStrings.addAll(s); 
    } 
}

This has the effect of ensuring that myStrings ends up with the same elements contained within the input parameter (or is empty if null is passed), while ensuring that the caller doesn’t have a reference to myStrings.

Now that myStrings’ reference can’t be changed, let’s just make it a constant:

public class MyClass {
    private final List<String> myStrings = new ArrayList<>();
    ...
}

While we’re at it, we shouldn’t be returning our underlying List via our getter. That too would leave the caller with a direct reference to myStrings. To remedy this, recall the “defensive copy” mantra that Effective Java beat into our heads (or, at least, should have):

public List<String> getMyStrings() {
    // depending on what, exactly, we want to return
    return new ArrayList<>(this.myStrings);  
}
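An alternative to the defensive copy, depending on your needs, is to hand back a read-only view of the underlying List; any caller that tries to mutate it gets an UnsupportedOperationException rather than silently modifying a throwaway copy:

public List<String> getMyStrings() {
    // read-only view: reflects later changes to myStrings, but cannot be modified
    return Collections.unmodifiableList(this.myStrings);
}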

At this point, we have a well-encapsulated class that eliminates the need for null-checking whenever its getter is called. We have, however, taken some control away from our clients. Since they no longer have direct access to our underlying List, they can no longer, say, add or remove individual Strings. 

No problem. We can simply add methods like

public void addString(String s) {
    this.myStrings.add(s);
}

and

public void removeString(String s) { 
    this.myStrings.remove(s); 
}

Might our callers need to add multiple Strings at once to a MyClass instance? That’s fine as well:

public void addStrings(Collection<String> c) {
    if (c != null) {
        this.myStrings.addAll(c);
    }
}

And so on…

public void clearStrings() {
    this.myStrings.clear();
}

public void replaceStrings(Collection<String> c) {
    clearStrings();
    addStrings(c); 
}

Collecting our thoughts

Here, then, is what our class might ultimately look like:

public class MyClass {

    private final List<String> myStrings = new ArrayList<>();

    public void setMyStrings(Collection<String> s) {
        this.myStrings.clear(); 
        if (s != null) { 
            this.myStrings.addAll(s); 
        } 
    }

    public List<String> getMyStrings() {
        return new ArrayList<>(this.myStrings);
    }

    public void addString(String s) { 
        this.myStrings.add(s); 
    }

    public void removeString(String s) { 
        this.myStrings.remove(s); 
    }

    // And maybe a few more helpful methods...

}

With this, we’ve achieved a class that:

  • is still basically a POJO that conforms to the JavaBean spec
  • fully encapsulates its private member(s)

and most importantly ensures that its method that returns a Collection always does just that–returns a Collection–and never returns null.
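As a quick sanity check, a caller can now use the getter on even a freshly constructed instance without any null guards; something like:

MyClass myClass = new MyClass();
for (String s : myClass.getMyStrings()) {   // empty list, never null
    System.out.println(s);
}

myClass.addString("hello");
System.out.println(myClass.getMyStrings().size()); // prints 1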

Using Vert.x to Connect the Browser to a Message Queue

Introduction

Like many of my software-engineering peers, my experience has been rooted in traditional Java web applications, leveraging Java EE or Spring stacks running in a web application container such as Tomcat or Jetty. While this model has served well for most projects I’ve been involved in, recent trends in technology–among them, microservices, reactive UIs and systems, and the so-called Internet of Things–have piqued my interest in alternative stacks and server technologies.

Happily, the Java world has kept up with these trends and offers a number of alternatives. One of these, the Vert.x project, captured my attention a few years ago. Plenty of other articles extol the features of Vert.x, which include its event loop and non-blocking APIs, verticles, event bus, etc. But above all of those, I’ve found in toying around with my own Vert.x projects that the framework is simply easy to use and fun.

So naturally I was excited when one of my co-workers and I recognized a business case that Vert.x was ideally suited for. Our company is a Java and Spring shop that is moving from a monolithic to a microservices architecture. Key to our new architecture is the use of an event log. Services publish their business operations to the event log, and other services consume them and process them as necessary. Since our company’s services are hosted on Amazon Web Services, we’re using Kinesis as our event log implementation.

Our UI team had expressed interest in enabling our web UIs to listen for Kinesis events and react to them. I’d recently created a POC that leveraged Vert.x’s web socket implementation, so I joined forces with a co-worker who focused on our Kinesis Java consumer. He spun up a Vert.x project and integrated the consumer code. I then integrated the consumers with Vert.x’s event bus, and created a simple HTML page that, via Javascript and web sockets, also integrated with the event bus. Between the two of us, we had within a couple of hours created an application that rendered an HTML page, listened to Kinesis, and pushed messages to be displayed in real time in the browser window.

I’ll show you how we did it. Note that I’ve modified our implementation for the purposes of clarity in this article in the following ways:

  • This article uses RabbitMQ rather than Kinesis. The latter is proprietary to AWS, whereas RabbitMQ is widely used and easy to spin up and develop prototypes against. While Kinesis is considered an event log and RabbitMQ a message queue, for our purposes their functionality is the same.
  • I’ve removed superfluous code, combined some classes, and abandoned some best practices (e.g. using constants or properties instead of hard-coded strings) to make the code samples easier to follow.

Other than that and the renaming of some classes and packages, the crux of the work remains the same.

The Task at Hand

First, let’s take a look at the overall architecture:



Figure 1: Overall architecture

At the center of the server architecture is RabbitMQ. On the one side of RabbitMQ, we have some random service (represented in the diagram by the grey box labelled Some random service) that publishes messages. For our purposes, we don’t care what this service does, other than the fact that it publishes text messages. On the other side, we have our Vert.x service that consumes messages from RabbitMQ. Meanwhile, a user’s Web browser loads an HTML page served up by our Vert.x app, and then opens a web socket to register itself to receive events from Vert.x.

We care mostly what happens within the purple box, which represents the Vert.x service. In the center, you’ll notice the event bus. The event bus is the “nervous system” of any Vert.x application, and allows separate components within an application to communicate with each other. These components communicate via addresses, which are really just names. As shown in the diagram, we’ll use two addresses: service.ui-message and service.rabbit.

The components that communicate via the event bus can be any type of class, and can be written in one of many different languages (indeed, Vert.x supports mixing Java, Javascript, Ruby, Groovy, and Ceylon in a single application). Generally, however, these components are written as what Vert.x calls verticles, or isolated units of business logic that can be deployed in a controlled manner. Our application employs two verticles: RabbitListenerVerticle and RabbitMessageConverterVerticle. The former registers to consume events from RabbitMQ, passing any message it receives to the event bus at the service.rabbit address. The latter receives messages from the event bus at the service.rabbit address, transforms the messages, and publishes them to the service.ui-message address. Thus, RabbitListenerVerticle‘s sole purpose is to consume messages, while RabbitMessageConverterVerticle‘s purpose is to transform those messages; they do their jobs while communicating with each other–and the rest of the application–via the event bus.

Once the transformed message is pushed to the service.ui-message event bus address, Vert.x’s web socket implementation pushes it up to any web browsers that have subscribed with the service. And really… that’s it!

Let’s look at some code.

Maven Dependencies

We use Maven, and so added these dependencies to the project’s POM file:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-core</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web-templ-handlebars</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-rabbitmq-client</artifactId>
  <version>3.3.3</version>
</dependency>

The first dependency, vertx-core, is required for all Vert.x applications. The next, vertx-web, provides functionality for handling HTTP requests. vertx-web-templ-handlebars enhances vertx-web with Handlebars template rendering. And vertx-rabbitmq-client provides us with our RabbitMQ consumer.

Setting Up the Web Server

Next, we need an entry point for our application.

package com.taubler.bridge;
import io.vertx.core.AbstractVerticle;

public class Main extends AbstractVerticle {

   @Override
   public void start() {
   }

}

When we run our application, we’ll tell Vert.x that this is the main class to launch (Vert.x requires the main class to be a verticle, so we simply extend AbstractVerticle). On startup, Vert.x will create an instance of this class and call its start() method.

Next we’ll want to create a web server that listens on a specified port. The Vert.x-web add-on uses two main classes for this: HttpServer is the actual web server implementation, while Router is a class that allows requests to be routed. We’ll create both in the start() method. Also, although we don’t strictly need one, we’ll use a templating engine. Vert.x-web offers a number of options; we’ll use Handlebars here.

Let’s see what we have so far:

   
TemplateHandler hbsHandler = TemplateHandler.create(HandlebarsTemplateEngine.create());

   @Override
   public void start() {

       HttpServer server = vertx.createHttpServer();
       Router router = new RouterImpl(vertx);
       router.get("/rsc/*").handler(ctx -> {
           String fn = ctx.request().path().substring(1);
           vertx.fileSystem().exists(fn, b -> {
               if (b.result()) {
                   ctx.response().sendFile(fn);
               } else {
                   System.out.println("Couldn’t find " + fn);
                   ctx.fail(404);
               }
           });
       });

       
        String hbsPath = ".+\\.hbs";
       router.getWithRegex(hbsPath).handler(hbsHandler);

       router.get("/").handler(ctx -> {
            ctx.reroute("/index.hbs");
       });

       // web socket code will go here

       server.requestHandler(router::accept).listen(8080);

   }

Let’s start with, well, the start() method. Creating the server is a simple one-liner: vertx.createHttpServer(). Here, vertx is an instance of io.vertx.core.Vertx, a class that is at the core of much of Vert.x’s functionality. Since our Main class extends AbstractVerticle, it inherits the member protected Vertx vertx.

Next, we’ll configure the server. Most of this work is done via a Router, which maps request paths to Handlers that process them and return the correct response. We first create an instance of RouterImpl, passing our vertx instance. This class provides a number of methods to route requests to their associated Handlers, which process the request and provide a response.

Since we’ll be serving up a number of static Javascript and CSS resources, we’ll create that handler first. The router.get(“/rsc/*”) call matches GET requests whose path starts with /rsc/. As part of Router’s fluent interface, it returns our router instance, allowing us to chain the handler() method to it. We pass that method an instance of io.vertx.core.Handler<io.vertx.ext.web.RoutingContext> in the form of a lambda expression. The lambda will look in the filesystem for the requested resource and, if found, return it, or else return a 404 status code.

Next we’ll create our second handler, to serve up dynamic HTML pages. To do this, we configure the router to route any paths that match the hbsPath regular expression, which essentially matches any paths ending in .hbs, to the io.vertx.ext.web.handler.TemplateHandler instance that we’d created as a class variable. Like our lambda above, this is an instance of Handler<RoutingContext>, but it’s written specifically to leverage a templating engine; in our case, a HandlebarsTemplateEngine. Although not strictly needed for this application, this allows us to generate dynamic HTML based on serverside data.

Last, we configure our router to internally redirect requests for / to /index.hbs, ensuring that our TemplateHandler handles them.

Now that our router is configured, we simply pass a reference to its accept() method to our server’s requestHandler() method, then (using HttpServer’s fluent API) invoke server’s listen() method, telling it to listen on port 8080. We now have a Web server listening on port 8080.

Adding Web Sockets

Next, let’s enable web sockets in our app. You’ll notice a comment in the code above; we’ll replace it with the following:

       SockJSHandlerOptions options = new SockJSHandlerOptions().setHeartbeatInterval(2000);
       SockJSHandler sockJSHandler = SockJSHandler.create(vertx, options);
       BridgeOptions bo = new BridgeOptions()
           .addInboundPermitted(new PermittedOptions().setAddress("/client.register"))
           .addOutboundPermitted(new PermittedOptions().setAddress("service.ui-message"));
       sockJSHandler.bridge(bo, event -> {
           System.out.println("A websocket event occurred: " + event.type() + "; " + event.getRawMessage());
           event.complete(true);
       });
       router.route("/client.register" + "/*").handler(sockJSHandler);

Our web client will use the SockJS Javascript library. Doing this makes integrating with Vert.x dirt simple, since Vert.x-web offers a SockJSHandler that does most of the work for you. The first couple of lines above create one of those. We first create a SockJSHandlerOptions instance to set our preferences. In our case, we tell our implementation to expect a heartbeat from our web client every 2000 milliseconds; otherwise, Vert.x will close the web socket, assuming that the user has closed or navigated away from the web page. We pass our vertx instance along with our options to create a SockJSHandler, called sockJSHandler. Just like our lambda and hbsHandler above, SockJSHandler implements Handler<RoutingContext>.

Next, we configure our bridge options. Here, the term “bridge” refers to the connection between our Vert.x server and the web browser. Messages are sent between the two on addresses, much like messages are passed on addresses along the event bus. Our BridgeOptions instance, then, configures on what addresses the browser can send messages to the server (via the addInboundPermitted() method) and which the server can send to the browser (via the addOutboundPermitted() method). In our case, we’re saying that the web browser can send messages to the server via the “/client.register” address, while the server can send messages to the browser on the “service.ui-message” address. We configure sockJSHandler’s bridge options by passing it our BridgeOptions instance, as well as a lambda representing a Handler<Event> that provides useful logging for us. Finally, we attach sockJSHandler to our router, listening for HTTP requests that match “/client.register/*”.

Consuming From the Message Queue

That covers the web server part. Let’s shift to our RabbitMQ consumer. First, we’ll write the code that creates our RabbitMQClient instance. This will be done in a RabbitClientFactory class:

public class RabbitClientFactory {

   public static RabbitClientFactory RABBIT_CLIENT_FACTORY_INSTANCE = new RabbitClientFactory();

   private static RabbitMQClient rabbitClient;
   private RabbitClientFactory() {}

   public RabbitMQClient getRabbitClient(Vertx vertx) {
       if (rabbitClient == null) {
           JsonObject config = new JsonObject();
            config.put("uri", "amqp://dbname:password@cat.rmq.cloudamqp.com/dbname");
            rabbitClient = RabbitMQClient.create(vertx, config);
       }
       return rabbitClient;
   }

}

This code should be pretty self-explanatory. The one method, getRabbitClient(), creates an instance of a RabbitMQClient, configured with the AMQP URI, assigns it to a static rabbitClient variable, and returns it. As per the typical factory pattern, a singleton instance is also created.

Next, we’ll get an instance of the client and subscribe it to listen for messages. This will be done in a separate verticle:

public class RabbitListenerVerticle extends AbstractVerticle {

   private static final Logger log = LoggerFactory.getLogger(RabbitListenerVerticle.class);

   @Override
   public void start(Future<Void> fut) throws InterruptedException {

       try {
           RabbitMQClient rClient = RABBIT_CLIENT_FACTORY_INSTANCE.getRabbitClient(vertx);
           rClient.start(sRes -> {
               if (sRes.succeeded()) {
                   rClient.basicConsume("bunny.queue", "service.rabbit", bcRes -> {
                       if (bcRes.succeeded()) {
                           System.out.println("Message received: " + bcRes.result());
                       } else {
                           System.out.println("Message receipt failed: " + bcRes.cause());
                       }
                   });
               } else {
                   System.out.println("Connection failed: " + sRes.cause());
               }
           });

           log.info("RabbitListenerVerticle started");
           fut.complete();

       } catch (Throwable t) {
           log.error("failed to start RabbitListenerVerticle: " + t.getMessage(), t);
           fut.fail(t);
       }
   }
}

As with our Main verticle, we implement the start() method (accepting a Future with which we can report our success or failure with starting this verticle). We use the factory to create an instance of a RabbitMQClient, and start it using its start() method. This method takes a Handler<AsyncResult<Void>> instance which we provide as a lambda. If starting the client succeeds, we call its basicConsume() method to register it as a listener. We pass three arguments to that method. The first, “bunny.queue”, is the name of the RabbitMQ queue that we’ll be consuming from. The second, “service.rabbit”, is the name of the Vert.x event bus address to which any messages received by the client (which will be of type JsonObject) will be sent (see Figure 1 for a refresher on this concept). The last argument is again a Handler<AsyncResult<Void>>; this argument is required, but we don’t really need it, so we simply log success or failure messages.

So at this point, we have a listener that receives messages from RabbitMQ and immediately sticks them on the event bus. What happens to the messages then?

Theoretically, we could allow those messages to be sent straight to the web browser to handle. However, I’ve found that it’s best to allow the server to format any data in a manner that’s most easily consumed by the browser. In our sample app, we really only care about printing the text contained within the RabbitMQ message. However, the messages themselves are complex objects. So we need a way to extract the text itself before sending it to the browser.

So, we’ll simply create another verticle to handle this:

public class RabbitMessageConverterVerticle extends AbstractVerticle {

   @Override
   public void start(Future<Void> fut) throws InterruptedException {
       vertx.eventBus().consumer("service.rabbit", msg -> {
           JsonObject m = (JsonObject) msg.body();
           if (m.containsKey("body")) {
               vertx.eventBus().publish("service.ui-message", m.getString("body"));
           }
       });
   }
}

There’s not much to it. Again we extend AbstractVerticle and override its start() method. There, we gain access to the event bus by calling vertx.eventBus(), and listen for messages by calling its consumer() method. The first argument indicates the address we’re listening to; in this case, it’s “service.rabbit”, the same address that our RabbitListenerVerticle publishes to. The second argument is a Handler<Message<Object>>. We provide that as a lambda that receives a Message<Object> instance from the event bus. Since we’re listening to the address that our RabbitListenerVerticle publishes to, we know that the Object contained as the Message’s body will be of type JsonObject. So we cast it as such, find its “body” key (not to be confused with the body of the Message<Object> we just received from the event bus), and publish that to the “service.ui-message” event bus channel.

Deploying the Message Queue Verticles

So we have two new verticles designed to get messages from RabbitMQ to the “service.ui-message” address. Now we need to ensure they are started. So we simply add the following code to our Main class:

   
   protected void deployVerticles() {
       deployVerticle(RabbitListenerVerticle.class.getName());
       deployVerticle(RabbitMessageConverterVerticle.class.getName());
   }

   protected void deployVerticle(String className) {
       vertx.deployVerticle(className, res -> {
           if (res.succeeded()) {
               System.out.printf("Deployed %s verticle%n", className);
           } else {
               System.out.printf("Error deploying %s verticle: %s%n", className, res.cause());
           }
       });
   }

Deploying verticles is done by calling deployVerticle() on our Vertx instance. We provide the name of the class, as well as a Handler<AsyncResult<String>>. We create a deployVerticle() method to encapsulate this, and call it to deploy each of RabbitListenerVerticle and RabbitMessageConverterVerticle from within a deployVerticles() method.

Then we add deployVerticles() to Main’s start() method.
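Putting that together, Main’s start() method ends up looking roughly like this (abbreviated; the router setup is the code shown earlier):

   @Override
   public void start() {
       HttpServer server = vertx.createHttpServer();
       Router router = new RouterImpl(vertx);
       // ... static resource, template, and SockJS handlers as shown above ...
       server.requestHandler(router::accept).listen(8080);

       deployVerticles();
   }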

HTML and Javascript

Our serverside implementation is done. Now we just need to create our web client. First, we create a basic HTML page, index.hbs, and add it to a templates/ folder within our web root:

<html>
<head>
 <title>Messages</title>
 <link rel="stylesheet" href="/rsc/css/style.css">
  <script src="https://code.jquery.com/jquery-3.1.0.min.js" integrity="sha256-cCueBR6CsyA4/9szpPfrX3s49M9vUU5BgtiJj06wt/s=" crossorigin="anonymous"></script>

  <script src="http://cdn.jsdelivr.net/sockjs/0.3.4/sockjs.min.js"></script>

  <script src="/rsc/js/vertx-eventbus.js"></script>

  <script type="text/javascript" src="/rsc/js/main.js"></script>
</head>

<body>
 <h1>Messages</h1>
 <div id="messages"></div>
</body>
</html>

We’ll leverage the jQuery and SockJS Javascript libraries, so those scripts are imported. We’ll also import two scripts that we’ve placed in a rsc/js/ folder: main.js, which we’ve created, and vertx-eventbus.js, which we’ve downloaded from the Vert.x site. The other important element is a DIV of id messages. This is where the RabbitMQ messages will be displayed.

Let’s look at our main.js file:

var eventBus = null;

var eventBusOpen = false;

function initWs() {
   eventBus = new EventBus('/client.register');
   eventBus.onopen = function () {
     eventBusOpen = true;
     regForMessages();
   };
   eventBus.onerror = function(err) {
     eventBusOpen = false;
   };
}

function regForMessages() {
   if (eventBusOpen) {
      eventBus.registerHandler('service.ui-message', function (error, message) {
           if (message) {
             console.info('Found message: ' + message);
             var msgList = $("div#messages");
             msgList.html(msgList.html() + "<div>" + message.body + "</div>");
            } else if (error) {
              console.error(error);
            }
       });
   } else {
       console.error("Cannot register for messages; event bus is not open");
   }
}

$( document ).ready(function() {
  initWs();
});

function unregister() {
  reg().subscribe(null);
}

initWs() will be called when the document loads, thanks to jQuery’s document.ready() function. It will open a web socket connection on the /client.register channel (permission for which, as you recall, was explicitly granted by our BridgeOptions class).

Once it successfully opens, regForMessages() is invoked. This function invokes the Javascript representation of the Vert.x event bus, registering to listen on the “service.ui-message” address. Vert.x’s sockjs implementation provides the glue between the web socket address and its event bus address. The registerHandler() call also takes a callback function that accepts a message, or an error if something went wrong. As with Vert.x event bus messages, each message received will contain a body, which in our case consists of the text to display. Our callback simply extracts the body and appends it to the messages DIV in our document.

Running the Whole Application

That’s it! Now we just need to run our app. First, of course, we need a RabbitMQ instance. You can either download a copy (https://www.rabbitmq.com/download.html) and run it locally, or use a cloud provider such as Heroku (https://elements.heroku.com/addons/cloudamqp). Either way, be sure to create a queue called bunny.queue.

Finally, we’ll launch our Vert.x application. Since Vert.x does not require an app server, it’s easy to start up as a regular Java application. The easiest way is to just run it straight within your IDE. Since I use Eclipse, I’ll provide those steps:

  1. Go to the Run -> Run Configurations menu item
  2. Click the New Run Configurations button near the top left of the dialog
  3. In the right-hand panel, give the run configuration a name such as MSGQ UI Bridge
  4. In the Main tab, ensure that the project is set to the one in which you’d created the project
  5. Also in the Main tab, enter io.vertx.core.Launcher as the Main class.
  6. Switch to the Arguments tab and add the following as the Program arguments: run com.taubler.bridge.Main --launcher-class=io.vertx.core.Launcher
  7. If you’re using a cloud provider such as Heroku, you might need to provide a system property representing your Rabbit MQ credentials. Do this in the Environment tab.

Once that’s set up, launch the run configuration. The server is listening on port 8080, so pointing a browser to http://localhost:8080 will load the index page:

[Image: blog-message-1.png]

Next, go to the RabbitMQ management console’s Queues tab. Under the Publish message header, you’ll find controls allowing you to push a sample message to the queue:

[Image: blog-rabbit-1.png]

Once you’ve done that, head back to the browser window. Your message will be there waiting for you!

[Image: blog-message-2.png]

That’s it!

I hope this post has both taught you how to work with message queues and web sockets using Vert.x, and demonstrated how easy and fun working with Vert.x can be.

Collection functions

A common use of functional-style programming is applying transformative functions to collections. For example, I might have a List of items, and I want to transform each item in a certain way. There are a number of such functions that are commonly found, in one form or another, in languages that allow functional programming. For example, Seq in Scala provides a map() function, while a method of the same name can be found in Java’s Stream class.

Keeping these functions/methods straight can be difficult when starting out, so I thought I’d list out some of the common ones, along with a quick description of their purpose. I’ll use Scala’s implementation primarily, but will try to point out where Java differs. Hopefully the result will be useful for users of other languages as well.

flatten()

Purpose: Takes a list of lists/sequences, and puts each element in each list/sequence into a single list
Result: A single list consisting of all elements
Example:
scala> val letters = List( List("a","e","i","o","u"), List("b","d","f","h","k","l","t"), List("c","m","n","r","s","v","w","x","z"), List("g","j","p","q","y") )
letters: List[List[String]] = List(List(a, e, i, o, u), List(b, d, f, h, k, l, t), List(c, m, n, r, s, v, w, x, z), List(g, j, p, q, y))
scala> letters.flatten
res19: List[String] = List(a, e, i, o, u, b, d, f, h, k, l, t, c, m, n, r, s, v, w, x, z, g, j, p, q, y)
scala> letters.flatten.sorted
res20: List[String] = List(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z)

map()

Purpose: Applies a function that transforms each element in a list.
Result: A list (or stream) consisting of the transformed elements. The resulting elements can be of a different type than the original elements.
Example:
scala> val nums = List(1, 2, 3, 4, 5)
scala> val newNums = nums.map(n => n * 2)
newNums: List[Int] = List(2, 4, 6, 8, 10)
scala> val newStrs = nums.map(n => s"number $n")
newStrs: List[String] = List(number 1, number 2, number 3, number 4, number 5)

Scala note: Often when working with Futures, you’ll see map() applied to a Future. In this case, the map() method, when provided with a mapping function for the value returned by the Future, returns a new Future that completes with the transformed value once the first Future completes.
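Java note: the closest JDK analogue is probably CompletableFuture’s thenApply() method, which likewise returns a new future holding the transformed result once the first completes. A minimal sketch:

import java.util.concurrent.CompletableFuture;

public class FutureMapExample {
    public static void main(String[] args) {
        CompletableFuture<Integer> eventualNum =
                CompletableFuture.supplyAsync(() -> 21);

        // thenApply() plays the role of Scala's Future.map()
        CompletableFuture<String> eventualStr =
                eventualNum.thenApply(n -> "number " + (n * 2));

        System.out.println(eventualStr.join()); // prints "number 42"
    }
}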

flatMap()

Purpose: Applies a function that itself returns a sequence/list to each element of a list, then flattens all of the resulting sequences into a single list. A combination of map and flatten.
Result: A single list (or stream) containing all of the elements produced by applying the provided function to each element.
Example:
scala> val numGroups = List( List(1,2,3), List(11,22,33) )
numGroups: List[List[Int]] = List(List(1, 2, 3), List(11, 22, 33))
scala> numGroups.flatMap( n => n.filter(_ % 2 == 0) )
res8: List[Int] = List(2, 22)

fold()

Purpose: Starts with a single seed value of type T (call it v), then iterates over a List of T, combining each element with v via the provided function to produce a new value of v on each iteration. The order in which elements are combined is not guaranteed. Note: This is very similar to the reduce() method in Java streams.
Result: A single T value (which would be the final value of v as described above)
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.fold(0) ((a,b) => if (a % 2 == 0) a else b)
res25: Int = 0

foldLeft()

Purpose: Like fold(), always iterating from the left to the right.
Result: A single T value
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.foldLeft(-1) ((a,b) => if (a % 2 == 0) a else b)
res27: Int = 2

foldRight()

Purpose: Like fold(), always iterating from the right to the left.
Result: A single T value
Example:
scala> val nums = List(1,2,3,4,5,6,7,8,9)
nums: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> nums.foldRight(-1) ((a,b) => if (b % 2 == 0) b else a)
res28: Int = 8
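For Java developers, here is a rough (not exact) mapping of a few of the above onto the Stream API; note that Streams are single-use and reduce() expects an associative function:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamExamples {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);
        List<List<Integer>> numGroups = Arrays.asList(
                Arrays.asList(1, 2, 3), Arrays.asList(11, 22, 33));

        // map(): transform each element
        List<Integer> doubled = nums.stream()
                .map(n -> n * 2)
                .collect(Collectors.toList());          // [2, 4, 6, 8, 10]

        // flatMap(): map each inner list to a stream, then flatten
        List<Integer> evens = numGroups.stream()
                .flatMap(group -> group.stream().filter(n -> n % 2 == 0))
                .collect(Collectors.toList());          // [2, 22]

        // reduce(): roughly fold() with an identity value
        int sum = nums.stream().reduce(0, (a, b) -> a + b);  // 15

        System.out.println(doubled + " " + evens + " " + sum);
    }
}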

Before You Use SpringData and MongoDB

The Upfront Summary

For those who don’t have time to read a long blog post, here’s the gist of this article: always, always, always annotate your persisted SpringData entity classes with @Document(collection="<custom-collection-name>") and @TypeAlias("<custom-type>"). This should be an unbreakable rule. Otherwise you’ll be opening yourself up to a world of hurt later.

SpringData is Easy to Get Started With

Like many Java developers, I rely on the Spring Framework. Everything, from my data access layer to my MVC controllers, is managed within a Spring application context. So when I decided to add MongoDB to the mix, it was without a second thought that I decided to use SpringData to interact with Mongo.

That was months ago, and I’ve run into a few problems. As it turns out, these particular problems were easy to solve, but it took a while to recognize what was going on and come up with a solution. Surprisingly little information existed on StackOverflow or the Spring forums for what I’m imagining is a common problem.

Let me explain.

My Data Model

My application is basically an editor. Think of a drawing program, where users can edit a multi-page “drawing” document. Within a drawing’s page, users can create and manipulate different shapes. As a document store, MongoDB is well-suited for persisting this sort of data. Roughly speaking, my data model was something like this (excuse the lack of UML):

  • Drawings are the top-level container
  • A Drawing has one or more Pages
  • A Page consists of many Shapes.
  • Shape is an abstract class. It has some properties shared by all Shape subclasses, such as size, border width and color, background color, etc
  • Concrete subclasses of Shape can contain additional properties. For example, Star has number of points, inner radius, outer radius, etc

Drawings are stored separately from Pages; i.e. they are not nested. Shapes, however, are nested within Pages. For example, here’s a snippet from the Drawing class:

public class Drawing extends BaseDocument {

    @Id 
    private String id;

    // ….


}

and the Page class:

public class Page extends BaseDocument {

    @Id 
    private String id;

    @Indexed
    private String drawingId;

    private List<Shape> shapes = new ArrayList<Shape>();

    // ….


}

So in other words, when a user goes to edit a given drawing, we simply retrieve all of the Pages whose drawingId matches the ID of the drawing being edited.

Don’t Accept SpringData’s Defaults!

SpringData offers you the ability to customize how your entities are persisted in your datastore. But if you don’t explicitly customize, SpringData will make do as best as it can. While that might seem helpful up front, I’ve found the opposite. SpringData’s default behavior will invariably paint you into a corner that’s difficult to get out of. I’d argue that, rather than guessing, SpringData should throw a runtime exception at startup. Short of that, every tutorial about SpringData/MongoDB should strongly encourage developers of production applications to tell Spring how to persist their data.

The first default is how SpringData maps classes to collections. Collections are how many NoSQL data stores, MongoDB included, store groups of related data. Although it’s not always appropriate to compare NoSQL databases to traditional RDBMs, you can roughly think of a collection the same way you think of a table in a SQL database.

Chapter 7 of the SpringData/Mongo docs explains how, by default, classes are mapped to collections:

The short Java class name is mapped to the collection name in the following manner. The class ‘com.bigbank.SavingsAccount’ maps to ‘savingsAccount’ collection name.

So based on my data model, I knew I’d find a drawing collection and a page collection in my MongoDB instance.

Now, I’ve used ORMs like Hibernate extensively. Probably for that reason, I wasn’t content to let my Mongo collections be named for me. So I looked for a way to specify my collection names.

The answer was simple enough. Although not a strict requirement, persisted entities should be annotated with the @org.springframework.data.mongodb.core.mapping.Document annotation. Furthermore, that annotation takes a collection argument in which you can pass your desired collection name.

So my Drawing class became annotated with @Document(collection="drawing"), and my Page class became annotated with @Document(collection="page"). The end result would be the same–a drawing and a page collection in Mongo–but I now had control. I specified the collection name simply because it made me feel more comfortable, but it turns out there’s an important, tangible reason to do so (more on that later).

With everything in place, I started testing my app. I created a few drawings, added some pages and shapes, and saved them all to MongoDB. Then I used the mongo command-line tool to examine my data. One thing immediately stuck out. Every document stored in there had a _class property which pointed to the fully-qualified name of the mapped class. For example, each Page document contained the property “_class” : “com.myapp.documents.Page”.

The purpose of that value, as you might guess, is to instruct Spring Data how to map the document back to a class when reading data out. This convention struck me as a little concerning. After all, my application might be pure Java at this point, but my data store should be treated as language-agnostic. Why would I want Java-specific metadata associated with my data?

After thinking about it, I shook off my concern. Sure, the _class property would be there on every record, but if I started using another platform to access the data, then the property could just be ignored. Practically speaking, what could actually go wrong?

What Could Go Wrong

Then one day I decided to refactor my entire application. I’d been organizing my code based on architectural layer, and I decided to try organizing it by feature instead. Eclipse of course allowed me to do this in a matter of minutes. So I WARred up my changes, deployed them to Tomcat, and voilà! I could no longer read in any of my drawing/page/shape data.

It quickly became clear what the problem was. My data contained _class information that pointed to a now-nonexistent fully-qualified class name. Shape was no longer in the com.myapp.documents package.

With the problem identified, what was the solution?

Making it Right

As mentioned above, SpringData offers the @TypeAlias annotation. Annotating a document as such and providing a value tells Spring to store that value, rather than the fully-qualified classname, as the document’s _class property.

So here’s what I did:

@Document(collection="page")
@TypeAlias("myapp.page")
public class Page extends BaseDocument {
    // …
}

Of course, I still couldn’t read any of my existing data in, but moving forward, any new data would be refactor-proof. Fortunately my app was nowhere near production-ready at this point, so deleting all of my old drawings and starting with new ones was no problem. If that hadn’t been an option, I figure I’d have had two options:

  1. Change the @TypeAlias value to match the old, fully-qualified class name, rather than the generic myapp.page value. Of course I’d be stuck with a confusing, language-specific value at that point.
  2. Go through each of the affected collections in my MongoDB store and update their _class values to the new, generic aliases (a rough sketch of this appears below). Certainly possible, although a bit scary for my taste as a MongoDB newbie.
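For what it’s worth, option 2 could probably be scripted with Spring’s own MongoTemplate rather than the mongo shell. A rough, untested sketch, assuming a MongoTemplate bean is available and using the alias values chosen above:

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class ClassAliasMigration {

    private final MongoTemplate mongoTemplate; // injected elsewhere

    public ClassAliasMigration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public void migrate() {
        // rewrite every Page document's _class value to the new alias;
        // the empty Query matches all documents in the collection
        mongoTemplate.updateMulti(new Query(),
                Update.update("_class", "myapp.page"), "page");
        // ...and similarly for drawings and any other affected collections
    }
}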

One additional improvement could be made at this point. The property in the MongoDB documents is still called _class, but now that’s a bit of a misnomer. I’d prefer something like, well, _alias. This is easy to change. Somewhere in your XML or Java config, you’ve probably specified a DefaultMongoTypeMapper. Simply pass a new typeKey value in the constructor. For example, here’s a snippet from my XML config:

  <bean id="mongoTypeMapper" class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper">
    <constructor-arg name="typeKey" value="_alias"></constructor-arg>
  </bean>

Are We All Set?

It turns out that I immediately ran into another problem. This one is a bit more specific to my particular application, so I’ll describe it in my next article.

What Happens When You Add a Table Cell to a UITableView?

Seasons change. People change. And user interfaces change, usually dynamically.

One dynamic change often seen in iOS apps is the addition or removal of a table cell to a UITableView. Often, you’ll see this when switching a UITableView from read-only to edit mode (for a dramatic example, try editing a contact in the Contacts app.) I’ve done this a few times in some of my iOS apps. I’ve found a few little tutorials out there that describe generally how to do this, but most contain some information gaps. Here, I’ll try to add some of the insight I gained.

I’ll use code examples from my most recent such app. In this app, I have a “Task Details” UITableView in which the user can view the details of a task (think: todo list item). They can also tap an Edit button to edit those details.

Tasks can contain, among other things, a detailed description. Whenever you’re editing a task, a large-ish UITableViewCell containing a UITextView is present, allowing the user to enter or modify a description. However, when in read-only mode, I wanted to remove the UITableViewCell that displays the task’s description if the task in fact has no description (otherwise, it’s wasted space).

The top part of the table view when we’re just looking at the Task. Note that the “Go Shopping” task has no description.

 

Tap the edit button, and the description cell appears. In the real app, of course, it’s animated.

One other important detail for our discussion is that tasks also have simple names. When in read-only mode, of course, I use a standard UITableViewCell (of style UITableViewCellStyleDefault) to display the task’s name. When in edit mode, however, I want to replace that default cell with a custom one containing a UITextField, in which the user can edit the task’s name.

The first thing I did was to create a method called isTaskDescriptionRow:

 

- (BOOL)isTaskDescriptionRow:(int)row {
    if (tableView.editing) {
        return row == TASK_BASICS_DESC_ROW;
    } else {
        if (taskDetailsCellIsShowing) {
            return row == TASK_BASICS_DESC_ROW;
        } else {
            return NO;
        }
    }
}
taskDetailsCellIsShowing is a BOOL that should be self-explanatory. It gets set during the setEditing: animated: call, as shown a little ways below.
Next, I added code like the following to tableView: cellForRowAtIndexPath:
 
if (row == TASK_BASICS_NAME_ROW) {
            if (tableView.editing) {
                // create and return a table view cell containing
                // a UITextField for editing the task’s name
            } else {
            // create and return a default table view cell
                // displaying the task’s name
            }

 

} else if ([self isTaskDescriptionRow:row]) {
            UITableViewCell *cell = [[UITableViewCell alloc]
                        initWithStyle: UITableViewCellStyleDefault 
                        reuseIdentifier: nil];
            …
            [descTextView release];
            descTextView = [[UITextView alloc]
                        initWithFrame:CGRectMake(0, 0, w, h)];
            // set descTextView to be non-editable and non-opaque
            descTextView.text = task.description;
            descTextView.userInteractionEnabled = !self.editing;
            [cell.contentView addSubview: descTextView];
            return cell;
} else…
Delegating to isTaskDescriptionRow: makes it easy to determine whether the current NSIndexPath should display the current task’s description. isTaskDescriptionRow: will return YES for the appropriate NSIndexPath if the UITableView is currently in edit mode, or if the current task has a description. If so, then I create a UITextView, configure it, and add it to the current table cell.
As I’d mentioned, when the current task has no description, I want the description row to be added when the table view is entering editing mode, and removed when reverting to read-only mode. This is done in the setEditing: animated: method:
- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
        [super setEditing:editing animated:YES];
        [tableView setEditing:editing animated:YES];
    
        if (editing) {
                if (![self taskHasDescription]) {
                    [self addDescriptionRow];
                } else {
                     [tableView reloadData];
                }
        } else {
               if (![self taskHasDescription]) {
                    [self removeDescriptionRow];
                } else {
                    [tableView reloadData];
                 }
        }
    
        [self.navigationItem setHidesBackButton:editing animated:YES];
        …
}
After invoking setEditing: animated: on the super class as well as the UITableView itself, I check to see if in fact we’re entering editing mode (i.e. if editing == YES). If so, and if the current task does not have a description (i.e. if ![self taskHasDescription]), I invoke my addDescriptionRow method which I will show below. If we’re leaving editing mode, and the current task does not have a description, then I invoke removeDescriptionRow, also shown below:
-(void)addDescriptionRow {
    [tableView beginUpdates];
    taskDetailsCellIsShowing = YES;
    NSIndexPath *idxPath = 
        [NSIndexPath indexPathForRow:TASK_BASICS_DESC_ROW
                        inSection:TASK_BASICS_SECTION];
    NSArray *idxPaths = [NSArray arrayWithObject:idxPath];
    [tableView insertRowsAtIndexPaths:idxPaths
        withRowAnimation:UITableViewRowAnimationFade];
    [tableView endUpdates];
}
 
-(void)removeDescriptionRow {
    [tableView beginUpdates];
    taskDetailsCellIsShowing = NO;
    NSIndexPath *idxPath = 
        [NSIndexPath indexPathForRow:TASK_BASICS_DESC_ROW
                        inSection:TASK_BASICS_SECTION];
    NSArray *idxPaths = [NSArray arrayWithObject:idxPath];
    [tableView deleteRowsAtIndexPaths:idxPaths
        withRowAnimation:UITableViewRowAnimationFade];
    [tableView endUpdates];
}

 

The basic approach in the add and remove methods shown above is to create an array of indexPaths representing the cell(s) to be added or removed. You then pass that array to either insertRowsAtIndexPaths: withRowAnimation: or deleteRowsAtIndexPaths: withRowAnimation:. Finally, Apple’s documentation encourages insertions/deletions to be wrapped within beginUpdates and endUpdates calls, so I do that as well.
That all worked, for the most part; the description cell appeared as desired. But I noticed a bit of a wrinkle. The first table cell–the one which displays the task’s name–is supposed to transform into an editable UITextField when the Edit button is tapped. But that wasn’t happening.
You can see in the code blocks above that this should be done in tableView: cellForRowAtIndexPath:. When the indexPath represents the task name row (section == TASK_BASICS_SECTION, row == TASK_BASICS_NAME_ROW), I then simply check the value of tableView.editing. If YES, I create a UITextField and add it to the UITableViewCell; if NO, I simply return a default UITableViewCell. Therefore, all I need to do is to add a call to tableView.reloadData in the setEditing: animated: method, right?
Wrong. When I did that, I received the following error:
Terminating app due to uncaught exception ‘NSInternalInconsistencyException’, reason: ‘Invalid update: invalid number of rows in section 0.  The number of rows contained in an existing section after the update (3) must be equal to the number of rows contained in that section before the update (3), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted).’
Hmm… time to look at my tableView: numberOfRowsInSection: method:
- (NSInteger)tableView:(UITableView *)tv numberOfRowsInSection:(NSInteger)section {
    if (section == TASK_BASICS_SECTION) {
        return [self numTaskDetailsBasicSectionTableRows];
    } else …
    }
}
Which delegates to this method:
- (int)numTaskDetailsBasicSectionTableRows {
    if (tableView.isEditing) {
        return NUM_TASK_DETAILS_BASIC_TABLE_ROWS;
    } else {
        if (taskDetailsCellIsShowing) {
            return NUM_TASK_DETAILS_BASIC_TABLE_ROWS;
        } else {
            return NUM_TASK_DETAILS_BASIC_TABLE_ROWS - 1;
        }
    }
}

 

NUM_TASK_DETAILS_BASIC_TABLE_ROWS represents the number of rows in that top section when the description cell is being shown (i.e. 3). At the point where I call tableView.reloadData, however, it appears that the description cell hasn’t actually been added yet, at least as far as UIKit was concerned.
This led to my primary question: what exactly is UIKit doing when it adds a table view cell? Clearly, it needs to obtain information about the cell being added. This is especially apparent in this case, since the new cell is an atypical, custom cell, both in terms of its content and its height. Yet because the task name cell wasn’t being updated, it didn’t appear to be invoking tableView: cellForRowAtIndexPath: at any point while adding the new cell. So I set a breakpoint at the top of tableView: cellForRowAtIndexPath: to test my theory.
It turns out UIKit was calling tableView:cellForRowAtIndexPath:, but only once. Specifically, it was calling it for the indexPath I provided in the addDescriptionRow method; i.e., the indexPath of the new description cell.
Pretty clever, actually.
But in my case, that optimization was preventing the task-name cell from being updated. Thus, I still needed to call [tableView reloadData], but I needed to do it after the table cell addition was complete. At this point, I started scouring the documentation, as well as various online discussion forums, looking for some way to register a callback to be invoked once the table cell had been fully added. Unfortunately, I couldn’t find one.
I played around a bit with reordering certain calls and performing some calls asynchronously. I was able to get the functionality to work, but the animation wound up being far from smooth; quite jarring, in fact. I was also able to get the functionality to work without any animation at all, but I really wanted the animation effect. I finally settled on the following:
- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
    // … all the other stuff …
    if (![self taskHasDescription]) {
        [self performSelector:@selector(reloadNameCell:) 
                   withObject:[NSNumber numberWithInt:0]
                   afterDelay:0.33];
    }
}
 
- (void)reloadNameCell:(NSNumber *)count {
    @try {
        NSIndexPath *ip = [NSIndexPath
                indexPathForRow:TASK_BASICS_NAME_ROW
                        inSection:TASK_BASICS_SECTION];
        [tableView reloadRowsAtIndexPaths:
                [NSArray arrayWithObject:ip]
                withRowAnimation:UITableViewRowAnimationNone];
    }
    @catch (NSException *exception) {
        int i = [count intValue];
        NSLog(@"Exception (%d) in reloading name cell, %@", 
                        i, [exception description]);
        if (i < 5) {
            [self performSelector:@selector(reloadNameCell:) 
                       withObject:[NSNumber numberWithInt:i + 1]
                       afterDelay:0.125];
        } else {
            NSLog(@"Too many retries; aborting");
        }
    }
}
If that looks like a hack, that’s because it is. Basically, we wait 0.33 seconds, which seems to be just enough time for the cell-adding animation to complete. We then invoke reloadNameCell:, which at its core simply reloads the UITableViewCell that corresponds to the task name. We are also prepared to catch the aforementioned NSInternalInconsistencyException in case we didn’t wait quite long enough; if we do catch it, we wait again for a short period and try once more. We track the number of retries and abort after 5 attempts; at that point, something has gone truly wrong.
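As an aside: if you’d rather not guess at the animation duration, one option is to wrap the row updates in an explicit CATransaction and use its completion block, which fires once the animations started inside the transaction have finished. I haven’t retrofitted this into the app above, so treat the following as a rough sketch: it reuses the tableView ivar, the taskDetailsCellIsShowing flag, and the constants from earlier, the method name is made up for illustration, and it assumes blocks are available (iOS 4 and later).

#import <QuartzCore/QuartzCore.h>   // for CATransaction

- (void)addDescriptionRowThenReloadNameCell {
    [CATransaction begin];
    [CATransaction setCompletionBlock:^{
        // Runs after the insertion animation completes, when the row counts are consistent again.
        NSIndexPath *nameIP = [NSIndexPath indexPathForRow:TASK_BASICS_NAME_ROW
                                                 inSection:TASK_BASICS_SECTION];
        [tableView reloadRowsAtIndexPaths:[NSArray arrayWithObject:nameIP]
                         withRowAnimation:UITableViewRowAnimationNone];
    }];
    [tableView beginUpdates];
    taskDetailsCellIsShowing = YES;
    NSIndexPath *descIP = [NSIndexPath indexPathForRow:TASK_BASICS_DESC_ROW
                                             inSection:TASK_BASICS_SECTION];
    [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:descIP]
                     withRowAnimation:UITableViewRowAnimationFade];
    [tableView endUpdates];
    [CATransaction commit];
}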
So there you have it. Hopefully that provides a little insight into what goes on under the hood while your UITableView’s structure is smoothly changing.

jQuery Mobile, AJAX, and the DOM

I’ve been messing around with jQuery and jQuery Mobile lately, and I ran into an issue that’s worth exploring. It might be a newbie issue (I’m sure seasoned jQuery Mobile developers don’t fall into this trap much), but I figure I might be able to save new developers some frustration.

Let me first describe the app I was working on. The app provides online utilities that I and other engineers might find useful from time to time in day-to-day work. The utilities are grouped into different categories (for example, Date Conversion tools, Data Encryption tools, etc.). While I started out developing these tools for desktop browsers, I figured I ought to port them to a mobile web app as well. The desktop tools are all displayed on a single page, but the mobile app would present each category of tools (e.g., Date Conversion) on its own page. The “homepage” would present a simple listview, allowing the user to select a category of tools to use.

The homepage is contained in its own HTML page (named, unsurprisingly, index.html). Each tool category also has its own page; for example, the Date Conversion tools are located on dc.html. It was with this page that I ran into problems. The primary date conversion tool takes milliseconds and converts them to a formatted date, and vice versa. Given this, I decided to allow users to choose the date format they want to work with. To implement this, I used a multi-page template in dc.html. The first page div consisted of the tool itself; the second div consisted of a group of radio buttons from which the user could select a date format.

Stripped of irrelevant code, dc.html looks like this:

<!DOCTYPE html>
<html>
<head>

<!-- jQuery / jQuery Mobile CSS and script includes omitted -->
<script type="text/javascript">
function addDfChoices() {
        // This method populates the radio buttons. Its existence becomes important later
}
$(document).ready(function() {
        …
        addDfChoices();
        …
});
</script>

</head>
<body>


<!-- Start of main page -->
<div data-role="page" id="main">

  <div data-role="content" id="main">
    <span class="section_content">
      <label style="text-align:baseline">Millis since 1970</label>
      <a href="#page_dateformat" style="width:100px;float:right">Format</a>
      
    </span>
  </div><!-- /main content -->
</div><!-- /main page -->


<div data-role="page" id="page_dateformat">
  …
  <div data-role="content">
    <div data-role="fieldcontain" id="df_option_container">
      <fieldset data-role="controlgroup" id="df_options">
      </fieldset>
    </div>
  </div><!-- /dateformat content -->
</div><!-- /dateformat page -->


</body>
</html>

I loaded dc.html into my browser (I did a lot of testing in Chrome, but of course also on mobile browsers such as mobile Safari on my iPhone). The page worked as expected. On page load, the main page div was displayed, while a small bit of JavaScript instantiated the different date formats and inserted them into the df_options fieldset located within the hidden page_dateformat page div. Tapping the Format link invoked a slide animation that loaded the radio buttons. Additional JavaScript code (not shown above) registered any radio button selections and modified the date format used on the main page accordingly.
Except… sometimes things didn’t work. Specifically, sometimes tapping on the Format link took me to the homepage instead.
Playing around a bit, I learned that if I manually loaded dc.html into my browser, things worked fine. If I started on index.html and then linked to dc.html, and then tapped the Format link, that’s when things went wrong.
This is the relevant code within index.html:
<ul data-role="listview" data-inset="true" data-theme="d">
  <li><a href="./dc.html">Date Conversion</a></li>
  <li><a href="./de.html">Data Encoding</a></li>
</ul>
Nothing too special there.
Well, I decided to use Chrome’s DOM inspector to get a look at what was happening. I loaded dc.html directly and, sure enough, the inspector showed the two major page divs, main and page_dateformat. Then I loaded index.html, tapped the Date Conversion link, and looked at the DOM inspector again once the date conversion tools page had loaded. There was the main page div… but where was the page_dateformat div?
Here’s what jQuery Mobile was doing. When dc.html itself was loaded, the entire contents of the HTML file were loaded into the DOM. Since I was using a multi-page template, any div declaring data-role="page" besides the first one (in my case, the page_dateformat div) was hidden… but it was still there in the DOM. The DOM looked roughly like this:
html
  head/
  body
    div data-role="page" id="main"/
    div data-role="page" id="page_dateformat" display="hidden"/
  /body
/html
Now, when I started by loading index.html and then tapping the Date Conversion link, dc.html was retrieved but only the first data-role="page" div was loaded into the DOM. Here, roughly, was the DOM at this point:
html
  head/
  body

    div data-role="page"/

            ^-- this was the page div from index.html; it’s hidden at this point

    div data-role="page" id="main"/
            ^-- this is the first data-role="page" div in dc.html, loaded via AJAX, then inserted into the DOM
  /body
/html
Thus, when I tapped the Format link (which, if you recall, is marked up as <a href="#page_dateformat">Format</a>), jQuery Mobile searched the DOM for a page with an id of page_dateformat, couldn’t find one, and effectively gave up and reloaded the current page… which at this point was still index.html.
At this point I reviewed the jQuery Mobile docs. The following little blurb provides a bit of a clue:

It’s important to note if you are linking from a mobile page that was loaded via Ajax to a page that contains multiple internal pages, you need to add a rel="external" or data-ajax="false" to the link. This tells the framework to do a full page reload to clear out the Ajax hash in the URL. This is critical because Ajax pages use the hash (#) to track the Ajax history, while multiple internal pages use the hash to indicate internal pages so there will be conflicts in the hash between these two modes.

I added data-ajax="false" to the links on index.html. This solved the problem; dc.html was subsequently loaded in full into the browser’s DOM, allowing the page_dateformat div to be found. However, declaring data-ajax="false" causes links to lose their animation effects; the pages are simply loaded the old-fashioned way.
I wasn’t too keen on that, so I removed the data-ajax="false" declarations and changed my approach to the date format selection. Instead of a new “page” with radio buttons, I simply presented the user with a popup select list. For various reasons, I kept the addDfChoices() JavaScript function depicted above; it was now invoked on page load to populate the select with date format options.
And immediately, I ran into the same pattern: when I started on dc.html, things worked great. When I started on index.html and linked to dc.html, the addDfChoices() function was never invoked. And of course, the reason was the same. Just as the page_dateformat div hadn’t been loaded into the DOM before, the addDfChoices() function was now nowhere to be found.
At this point, I added data-ajax="false" back into the homepage’s links; for that matter, I also went back to using the multi-page template with radio buttons. There are certainly other solutions worth exploring (separating the radio buttons out into their own HTML page and using some form of local storage to share state? putting the JavaScript functions inside the data-role="page" divs instead of in the page header?), which I’ll do at some point. But just understanding the general issue is a good starting point.

Changing the cell type of an NSTableColumn

This one keeps biting me, although the solution is really simple. I’ve added NSTableViews to NIBs before, and by default the table cells are all of type NSTextFieldCell. In some cases, though, I want a column to be represented by a different type of cell. Fortunately, Cocoa provides a number of different cell types, such as NSButtonCell, NSPopUpButtonCell, NSComboBoxCell, NSImageCell, NSSliderCell, and so on. Unfortunately, how to change a column’s cell type from the default NSTextFieldCell to one of the others isn’t so apparent.

I’ve had to do this a few times, and usually I poke around Interface Builder to see what controls it provides: first while selecting the table column, then the table column cell, then the table itself. Nothing. And then I remember: all you need to do is find the desired cell type in IB’s Object Library and drag it onto the column itself (the actual column in the table view, not its representation in the Objects list). Voila!
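If you’d rather make the change in code than in Interface Builder, NSTableColumn’s setDataCell: accomplishes the same thing for cell-based table views. The snippet below is only a sketch: myTableView, the "priority" identifier, and the item titles are placeholders I made up for illustration.

// Swap a column's default NSTextFieldCell for an NSPopUpButtonCell in code
// (cell-based table views only). Identifier and titles here are placeholders.
NSTableColumn *column = [myTableView tableColumnWithIdentifier:@"priority"];
NSPopUpButtonCell *cell = [[[NSPopUpButtonCell alloc] init] autorelease];
[cell addItemsWithTitles:[NSArray arrayWithObjects:@"Low", @"Medium", @"High", nil]];
[column setDataCell:cell];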