Before You Use SpringData and MongoDB

The Upfront Summary

For those who don’t have time to read a long blog post, here’s the gist of this article: always, always, always annotate your persisted SpringData entity classes with @Document(collection="<custom-collection-name>") and @TypeAlias("<custom-type>"). This should be an unbreakable rule. Otherwise you’ll be opening yourself up to a world of hurt later.

SpringData is Easy to Get Started With

Like many Java developers, I rely on the Spring Framework. Everything, from my data access layer to my MVC controllers, is managed within a Spring application context. So when I decided to add MongoDB to the mix, it was without a second thought that I decided to use SpringData to interact with Mongo.

That was months ago, and I’ve run into a few problems. As it turns out, these particular problems were easy to solve, but it took a while to recognize what was going on and come up with a solution. Surprisingly little information existed on StackOverflow or the Spring forums for what I imagine is a common problem.

Let me explain.

My Data Model

My application is basically an editor. Think of a drawing program, where users can edit a multi-page “drawing” document. Within a drawing’s page, users can create and manipulate different shapes. As a document store, MongoDB is well-suited for persisting this sort of data. Roughly speaking, my data model was something like this (excuse the lack of UML):

  • Drawings are the top-level container
  • A Drawing has one or more Pages
  • A Page consists of many Shapes.
  • Shape is an abstract class. It has some properties shared by all Shape subclasses, such as size, border width and color, background color, etc.
  • Concrete subclasses of Shape can contain additional properties. For example, Star has number of points, inner radius, outer radius, etc.

Drawings are stored separately from Pages; i.e. they are not nested. Shapes, however, are nested within Pages. For example, here’s a snippet from the Drawing class:

public class Drawing extends BaseDocument {

    @Id 
    private String id;

    // ….


}

and the Page class:

public class Page extends BaseDocument {

    @Id 
    private String id;

    @Indexed
    private String drawingId;

    private List<Shape> shapes = new ArrayList<Shape>();

    // ….


}

So in other words, when a user goes to edit a given drawing, we simply retrieve all of the Pages whose drawingId matches the ID of the drawing being edited.
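That lookup can be sketched with Spring Data’s MongoTemplate. This is illustrative only (the DAO class and method names are my own, not from the original app), but the query API calls are standard Spring Data MongoDB:

```java
import java.util.List;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class PageDao {

    private final MongoTemplate mongoTemplate;

    public PageDao(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Fetch every Page whose drawingId matches the drawing being edited.
    public List<Page> findPagesForDrawing(String drawingId) {
        Query query = Query.query(Criteria.where("drawingId").is(drawingId));
        return mongoTemplate.find(query, Page.class);
    }
}
```

Because drawingId is annotated with @Indexed, this query stays cheap even as the page collection grows.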

Don’t Accept SpringData’s Defaults!

SpringData offers you the ability to customize how your entities are persisted in your datastore. But if you don’t explicitly customize, SpringData will make do as best as it can. While that might seem helpful up front, I’ve found the opposite. SpringData’s default behavior will invariably paint you into a corner that’s difficult to get out of. I’d argue that, rather than guessing, SpringData should throw a runtime exception at startup. Short of that, every tutorial about SpringData/MongoDB should strongly encourage developers of production applications to tell Spring how to persist their data.

The first default is how SpringData maps classes to collections. Collections are how many NoSQL data stores, MongoDB included, store groups of related data. Although it’s not always appropriate to compare NoSQL databases to traditional RDBMSs, you can roughly think of a collection the same way you think of a table in a SQL database.

Chapter 7 of the SpringData/Mongo docs explains how, by default, classes are mapped to collections:

The short Java class name is mapped to the collection name in the following manner. The class 'com.bigbank.SavingsAccount' maps to 'savingsAccount' collection name.

So based on my data model, I knew I’d find a drawing collection and a page collection in my MongoDB instance.

Now, I’ve used ORMs like Hibernate extensively. Probably for that reason, I wasn’t content to let my Mongo collections be named for me. So I looked for a way to specify my collection names.

The answer was simple enough. Although not a strict requirement, persisted entities should be annotated with the @org.springframework.data.mongodb.core.mapping.Document annotation. Furthermore, that annotation takes a collection argument in which you can pass your desired collection name.

So my Drawing class became annotated with @Document(collection="drawing"), and my Page class became annotated with @Document(collection="page"). The end result would be the same (a drawing and a page collection in Mongo) but I now had control. I specified the collection name simply because it made me feel more comfortable, but it turns out there’s an important, tangible reason to do so (more on that later).

With everything in place, I started testing my app. I created a few drawings, added some pages and shapes, and saved them all to MongoDB. Then I used the mongo command-line tool to examine my data. One thing immediately stuck out: every document stored in there had a _class property containing the fully-qualified name of the mapped class. For example, each Page document contained the property "_class" : "com.myapp.documents.Page".

The purpose of that value, as you might guess, is to instruct Spring Data how to map the document back to a class when reading data out. This convention struck me as a little concerning. After all, my application might be pure Java at this point, but my data store should be treated as language-agnostic. Why would I want Java-specific metadata associated with my data?

After thinking about it, I shook off my concern. Sure, the _class property would be there on every record, but if I started using another platform to access the data, then the property could just be ignored. Practically speaking, what could actually go wrong?

What Could Go Wrong

Then one day I decided to refactor my entire application. I’d been organizing my code based on architectural layer, and I decided to try organizing it by feature instead. Eclipse of course allowed me to do this in a matter of minutes. So I WARred up my changes, deployed them to Tomcat, and voilà! I could no longer read in any of my drawing/page/shape data.

It quickly became clear what the problem was. My data contained _class information that pointed to a now-nonexistent fully-qualified class name. Shape was no longer in the com.myapp.documents package.

With the problem identified, what was the solution?

Making it Right

As mentioned above, SpringData offers the @TypeAlias annotation. Annotating a document as such and providing a value tells Spring to store that value, rather than the fully-qualified classname, as the document’s _class property.

So here’s what I did:

@Document(collection="page")
@TypeAlias("myapp.page")
public class Page extends BaseDocument {
    // …
}

Of course, I still couldn’t read any of my existing data in, but moving forward, any new data would be refactor-proof. Fortunately my app was nowhere near production-ready at this point, so deleting all of my old drawings and starting with new ones was no problem. If that wasn’t an option, then I figured I’d have two options:

  1. Change the @TypeAlias value to match the old, fully-qualified class name, rather than the generic myapp.page value. Of course I’d be stuck with a confusing, language-specific value at that point.
  2. Go through each of the affected collections in my MongoDB store and update their _class values to the new, generic aliases. Certainly possible, although a bit scary for my taste as a MongoDB newbie.
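Option 2 can be scripted rather than done by hand. Here’s a hedged sketch of such a one-off migration using MongoTemplate’s updateMulti; the class name and alias mirror the examples above, but treat this as illustrative, not a tested migration (and back up your data first):

```java
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class TypeAliasMigration {

    // Rewrite old fully-qualified _class values to the new generic alias.
    public void migrate(MongoTemplate mongoTemplate) {
        mongoTemplate.updateMulti(
                Query.query(Criteria.where("_class").is("com.myapp.documents.Page")),
                Update.update("_class", "myapp.page"),
                "page"); // the collection name set via @Document
    }
}
```

You’d run one such update per affected collection and per renamed class.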

One additional improvement could be made at this point. The property in the MongoDB documents is still called _class, but now that’s a bit of a misnomer. I’d prefer something like, well, _alias. This is easy to change. Somewhere in your XML or Java config, you’ve probably specified a DefaultMongoTypeMapper. Simply pass a new typeKey value in the constructor. For example, here’s a snippet from my XML config:

<bean id="mongoTypeMapper" class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper">
    <constructor-arg name="typeKey" value="_alias"></constructor-arg>
</bean>
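If you’re using Java config instead of XML, the equivalent bean is a one-liner. A minimal sketch (wiring the mapper into your MappingMongoConverter is omitted here for brevity):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper;

@Configuration
public class MongoConfig {

    @Bean
    public DefaultMongoTypeMapper mongoTypeMapper() {
        // store the type alias under "_alias" instead of the default "_class"
        return new DefaultMongoTypeMapper("_alias");
    }
}
```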

Are We All Set?

It turns out that I immediately ran into another problem. This one is a bit more specific to my particular application, so I’ll describe it in my next article.

Eclipse Run Configurations and VM Arguments

One of the great things about an IDE like Eclipse is how easy it is to run your application on the fly, as you’re developing it. Eclipse uses the concept of a Run Configuration for this. Run configurations define your code’s entry point (i.e. the project and Main class), as well as numerous other aspects of the run, including the JRE to use, the classpath, and various arguments and variables for your application’s use.

If you’re a developer who uses Eclipse, chances are you’ve created your own Run Configuration. It’s simple. Just go to the Run > Run Configurations… menu item. In the dialog that appears, click the New icon at the top left, and create your new configuration.

You can then use the six or so tabs to the right to configure your runtime environment. One of the options you can configure, within the Arguments tab, is the VM arguments to be passed to your application. For example, one of my web applications can be run in different environments; certain settings will change depending on whether I’m running in development, staging, or production mode. So I pass a -Dcom.taubler.env=x argument when I launch my application (where x might equal dev, stage, or prod). When I run my application in Tomcat, I simply add the argument to my startup script. Similarly, when I run my application through Eclipse, I can add the argument to my Run Configuration, within the Arguments tab.
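On the application side, consuming such a flag is just a system-property lookup. A minimal sketch (the Env class and the "dev" fallback are my assumptions for illustration, not part of the original setup):

```java
public class Env {

    // Reads the environment name passed via -Dcom.taubler.env=...;
    // falls back to "dev" when the flag is absent (e.g. in ad-hoc test runs).
    public static String current() {
        return System.getProperty("com.taubler.env", "dev");
    }
}
```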

This works great when you have a single Run Configuration, or at least a small number of stable Run Configurations. But I’ve found an issue when running unit tests through Eclipse. It seems that whenever you run a JUnit test in an ad-hoc manner (for example, right-clicking on a test class and selecting Run As > JUnit Test) Eclipse will implicitly create a Run Configuration for that test. This can result in numerous run configs that you didn’t even know you’d created. That in and of itself isn’t a problem. However, if your application code is expecting a certain VM argument to be passed in, how can you pass that VM argument in to your test run?

Initially when I encountered this problem, I found a solution that didn’t scale very well. Essentially I would run a unit test for the first time, allow Eclipse to create the associated Run Configuration, and let the test error out. Then I would open the Run Configurations window, find the newly-created configuration, click into the Arguments tab and add my -D argument. At that point, I could re-run my test successfully.

It turns out there’s a better way. You can configure Eclipse to always, by default, include one or more VM arguments whenever it launches a particular Java runtime environment. To do this, open Eclipse’s Preferences window. Expand the Java section and select Installed JREs. Then in the main content window, select the JRE that you are using to run your project, and click the Edit… button. A dialog will appear, with an entry field labeled Default VM arguments. That’s where you can enter your VM argument; for example, -Dcom.mycompany.myarg=abc123. Close the window, and from then on, any unit tests you run will automatically pass that argument to the VM.

There are of course a few downsides. The first is that, as a preference, this setting won’t be included with your project (of course, this could also be seen as a benefit). Secondly, this preference is tied to a specific JRE, so if you test with multiple JREs, you’ll need to copy the argument for all JREs. Still, it’s clearly a workable, scalable solution.

Easily Manage Properties in your Java / Spring application

Question to the Java developers out there: properties are easy, right? All you need to do is to type up some key/value pairs in a simple file, read that file in when your application starts up (Java makes that a no-brainer), and refer to the appropriate property in your application code. If you’re using Spring, it’s even easier; just configure your beans in your app context XML file.

Well, that’s fine for a simple application that you’re deploying to your local test server. But for a production-ready app, things are a little more complicated. There are typically two things you’ll need to be able to deal with:

  1. You’ll likely have different servers (test, staging, production, etc.), each with their own properties. For example, your test server will likely have different database credentials and connection URLs than your production server.
  2. You won’t want to store all of your properties in plaintext.

What we need is a way to separate the properties that differ across environments. Furthermore, we want to be able to encrypt some of these properties in case our application falls into the wrong hands.

Separate properties by environment

Let’s look at how we’d typically set properties in a Spring web application. For our purposes here, we’ll focus on setting our datasource’s properties: username, password, url, and the JDBC driver class name.
These would typically be set in your spring config file like so:
<bean id="dataSource"
    class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver" />
    <property name="url" value="jdbc:mysql://127.0.0.1:3366/my_db" />
    <property name="username" value="joe" />
    <property name="password" value="foobarbaz" />
</bean>
Of course, any of these properties might change as you deploy to a different environment. Your production database might be running on its own server, for example. And you certainly wouldn’t want to use the same credentials in production as you do on your test server… right?
So our first step is to externalize these properties into a separate file. Let’s create a file called db.properties. This file should go somewhere in our classpath; for example, in com/mycompany/config in your compile target directory path. This file will contain our data source values, like so:
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3366/my_db
db.username=joe
db.password=foobarbaz
Now, how can we use this properties file? For this, Spring provides a handy class, org.springframework.beans.factory.config.PropertyPlaceholderConfigurer. This class takes a list of property file locations. It will read all of the files in the list, and store their properties to use as placeholder replacements.
To use a PropertyPlaceholderConfigurer, simply declare it as a bean:
<bean id="propertyPlaceholderConfigurer"
    class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:com/mycompany/config/db.properties</value>
        </list>
    </property>
</bean>
Now we can replace our hard-coded properties in our spring config file with these placeholders:
<bean id="dataSource"
    class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="${db.driver}" />
    <property name="url" value="${db.url}" />
    <property name="username" value="${db.username}" />
    <property name="password" value="${db.password}" />
</bean>
At runtime, Spring will replace the ${…} tokens with their property values.
That works great, but what we really want is separate config files, one per environment. To do this, we copy our db.properties file to db-dev.properties and db-prod.properties (this assumes that we have two environments, development and production; of course, make as many copies as you have distinct environments.) Those files should of course continue to reside on our classpath. And of course, the values of each of the four properties should be changed to match the data source settings for the specific environment.
At runtime, we’ll want Spring to read the current environment’s db properties file. Fortunately, when it performs placeholder replacements, Spring will look for values not just read in by the PropertyPlaceholderConfigurer, but also system properties and VM arguments as well. So what we can do is to pass a VM argument on our startup script. This value will define the current environment. So in our dev environment we’d add this to our startup script:
    -Dcom.mycompany.env=dev
while in production we’d add this:
    -Dcom.mycompany.env=prod
Last, we adjust our PropertyPlaceholderConfigurer location to tell it which configuration file to read in.
<bean id="propertyPlaceholderConfigurer"
    class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:com/mycompany/config/db-${com.mycompany.env}.properties</value>
        </list>
    </property>
</bean>
The ${com.mycompany.env} token in the location above will be replaced with the value set in our startup script. As a result, our PropertyPlaceholderConfigurer will read in the environment-appropriate file, and our dataSource bean will be populated with the environment-appropriate values.

Encrypt your credentials

Our second goal is to keep from bundling our database credentials in plaintext when we package and distribute our web application. To do this, we are going to override Spring’s PropertyPlaceholderConfigurer with our own implementation. This implementation will do one additional thing: when it encounters a property whose name ends with “-enc”, it will assume that the value is encrypted, and therefore decrypt the value just after reading it in.
First, let’s assume that we have written a class called DataEncrypter, containing two methods:
public String encrypt(String rawValue);

public String decrypt(String encryptedValue);

Their functions should be obvious: encrypt() takes a plaintext String and converts it to an encrypted String; decrypt() reverses the process. Both methods should of course rely on the same key(s). This tutorial will skip what an actual implementation of each method might look like; that’s for the security experts to provide. For more information, look up the javax.crypto package. Instead, we’ll assume that a good implementation has been written.
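For illustration only, here is one possible shape such a class could take using javax.crypto. This is a sketch under stated assumptions, not production-grade crypto: it uses AES in ECB mode for brevity, whereas a real implementation should use an authenticated mode such as AES/GCM with a random IV and a securely managed key:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class DataEncrypter {

    private final SecretKeySpec key;

    // rawKey must be a valid AES key length, e.g. 16 bytes for AES-128
    public DataEncrypter(byte[] rawKey) {
        this.key = new SecretKeySpec(rawKey, "AES");
    }

    public String encrypt(String rawValue) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] enc = cipher.doFinal(rawValue.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(enc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public String decrypt(String encryptedValue) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] dec = cipher.doFinal(Base64.getDecoder().decode(encryptedValue));
            return new String(dec, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The round trip is what matters for the rest of this article: decrypt(encrypt(s)) must return s for any String s, given the same key.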
The first thing we’ll want to do is encrypt our database credentials with our DataEncrypter instance, and put those encrypted values into our properties files. The most straightforward way to do this is to simply create a Java class with a main(String[] args) method, which uses our DataEncrypter to encrypt a value passed as args[0]:
public static void main(String[] args) {
    DataEncrypter encrypter = …; // obtain an instance
    System.out.println( encrypter.encrypt(args[0]) );
}
Run that class once per property that you want to encrypt; alternatively, you can modify it to expect multiple properties in args, and encrypt them all in one run. Next, we’ll swap those values into the properties file in place of their plaintext versions. We’ll also rename the properties with an “-enc” suffix.
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3366/my_db
db.username-enc=f7BjAyDkWSs=
db.password-enc=4Q7xTCr5hZC9Ms6iTSjG3Q==
So how will those values be decrypted? We’ll need to create our own subclass of PropertyPlaceholderConfigurer (which we’ll call DecryptingPropertyPlaceholderConfigurer). PropertyPlaceholderConfigurer contains one important method that we’ll want to override:
protected void convertProperties(Properties props)
This method reads in the properties from any file found in its list of file locations. We’ll want to look for any properties whose names end with “-enc”, and invoke our DataEncrypter’s decrypt() method:
public class DecryptingPropertyPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    private static final String SUFFIX_ENC = "-enc";

    private final DataEncrypter encrypter = …; // obtain an instance

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Override
    protected void convertProperties(Properties props) {
        // props refers to the contents of a single properties file
        Enumeration<?> propertyNames = props.propertyNames();
        while (propertyNames.hasMoreElements()) {
            String propertyName = (String) propertyNames.nextElement();
            String propertyValue = props.getProperty(propertyName);
            // look for names such as password-enc
            if (propertyName.endsWith(SUFFIX_ENC)) {
                try {
                    String decValue = encrypter.decrypt(propertyValue);
                    propertyValue = decValue;
                    // normalize the property name by
                    // removing the "-enc" suffix
                    propertyName = propertyName.substring(
                            0, propertyName.length() - SUFFIX_ENC.length());
                } catch (EncryptionException e) {
                    log.error(String.format(
                            "Unable to decode %s = %s", propertyName, propertyValue), e);
                    throw new RuntimeException(e);
                }
            }
            props.setProperty(propertyName, propertyValue);
        }
    }
}
Note that we strip the “-enc” suffix from each encoded property that is encountered. That way, we can continue to refer to our data source password, for example, as db.password rather than db.password-enc. Note also that if we encounter an error decrypting a property, we log the issue and throw a RuntimeException. This is generally the correct thing to do; we don’t want our application to run with partially-incorrect properties. One thing to note here is that you might want to remove the logging of the propertyValue. A common encryption problem is one where someone forgot to encrypt the value before adding it to the properties file. In such a case, you probably would not want the plaintext value hanging out in your log file.
The last thing to do is simply plug our subclass into our Spring config file:
<bean id="propertyPlaceholderConfigurer"
    class="com.mycompany.DecryptingPropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:com/mycompany/config/db-${com.mycompany.env}.properties</value>
        </list>
    </property>
</bean>
Now, an important caveat is that this is not an end-all, be-all solution to securing your credentials. This approach is only as strong as your encryption algorithm, and it will never ensure 100% security. It’s not a substitute for simply ensuring that your source code and your webapp bundle remain only in your hands (or for taking other measures such as restricting access to your database server, not reusing passwords, etc). But should someone else get ahold of your credentials, this approach should at least buy you enough time to change your credentials to prevent unauthorized access.

Done!

Now that you’ve set up your infrastructure, it should be easy to see how to set additional properties, whether in the same files we’d created above or in new files that you might create. Managing properties is once again very easy, but now it’s also very flexible and a tad more secure than it was before.

Profiles and certs and devices… oh hell!

Has it really been more than two years since I’ve written a post? It’s time to start writing again!

I thought that I would kick things off with something that’s cropped up again and again over those past 2+ years. And that is dealing with provisioning profiles when testing and deploying iOS apps. There may be seasoned, full-time iOS engineers out there who have all of this down pat. But for me, someone who doesn’t focus all of his attention on iOS anymore, this process can be perpetually confusing. I’ve tended to learn just enough to get things working, and when things would later mysteriously break, I’d flail around for a while until things started working again.

If you’re like me, then read on. It’s time to figure this stuff out once and for all.

Provisioning Profiles

The first thing you learn is that everything centers around Provisioning Profiles. Provisioning profiles are really an aggregation of other elements that together determine whether an app can be built and run on a device. There are two types of provisioning profiles: Development and Distribution. As their names suggest, Development profiles are used to test ad hoc builds as you’re developing your app; Distribution profiles are used to distribute apps to the App Store.

We’ll focus on Development profiles here. Development profiles aggregate three things:

  1. Your App ID; for example, com.mycompany.mymobileapp. Each profile can contain only one app ID.
  2. A list of devices; specifically, device UUIDs. To install a build onto a device for ad hoc testing, that device’s UUID must be present in the development profile.
  3. A list of developer certificates that can be used to sign the profile.

App IDs

As mentioned above, each profile can contain only one app ID, so if you have multiple app IDs, you’ll need a separate profile for each. For example, my current company uses two app IDs: one for use when hitting our staging/test server, and one for our production server. So we need two separate development profiles.
Well, that’s not entirely true. You can get away with using wildcards if your app does not use certain features such as Apple Push Notifications. For example, if you have a simple app that uses the App ID com.mycompany.mymobileapp.testing when hitting a testing server, and com.mycompany.prod when hitting a production server, you can create a single Development profile with the App ID of
com.mycompany.*

Devices

To help enforce the “walled garden” App Store distribution model, development builds can only be installed on a limited number of devices. Apple allows each developer account up to 100 devices–identified by their UUIDs–to be registered. These UUIDs are stored in your development profile, ensuring that the profile can only be used on one of those devices.

Certificates

In going back over some earlier notes I had made, I saw that I had stated that profiles contain a list of developers who are allowed to sign the profile. This is not entirely true. Generally, each developer will have his or her own certificate–associated with his or her own private key–installed in his/her own OS X Keychain, for use in signing iOS apps. But this is not technically required. An organization could, for example, have a single certificate/private-key combination that every developer in the organization imports into his/her Keychain. This can make life easier, assuming of course that no developer ever leaves the organization.

The sum of the parts

Looking at development profiles as an aggregation of those three parts helps to make sense of why Apple established its requirements. A given iOS app (identified by the App ID) is allowed to be installed on a limited set of devices for testing purposes, and only certain developers, identified by their certificates, should be allowed to build that app.

What do you need to do?

So how do these concepts translate into practice? Well, let’s assume that you’ve set up your iOS developer account, but aside from that, you’re starting from scratch.
First, you’ll visit https://developer.apple.com and, after logging in, make your way to the Identifiers section in your account. Create an App ID for your app. App IDs generally take the reverse-domain-name form, and incorporate the name of the product. For example, yours might look like com.yourcompany.yourproduct. As mentioned above, if you need to use more than one App ID for your product, you can either create multiple App IDs, or else create one using a wildcard (e.g. com.yourcompany.*)
Next, you’ll go to the Certificates section. Follow the instructions to create a new certificate, ensuring at the end that you download the certificate to your Mac. For starters, you’ll want to create a certificate of type iOS App Development. Once finished, double-click the certificate to install it, along with the associated private key, into your Keychain.
Then go to the Devices section, and enter the UUID of any devices on which you’ll want to test your app. There are plenty of resources online to help you find your devices’ UUIDs. Be sure to enter the UUIDs correctly; once entered, you won’t be able to remove any UUIDs for a year.
Finally, you’ll go to the Provisioning Profiles section. Create one of type Development / iOS App Development. When prompted, choose the App ID that you’d just created, select your new certificate, and the UUIDs of any devices on which you want to be able to test.
When finished, be sure to download the profile and double-click it to install it in your copy of Xcode. Alternatively, you can tell Xcode to download and install the profile. How to do this seems to change from release to release. To do this in the current release of Xcode 5, you need to open the Preferences window, go to Accounts, enter your account info, then highlight your account name, click View Details, and finally click the refresh button at the bottom-left of the dialog.

When things change

Typically you’ll get this far, and after a bit of stumbling and tweaking things, you’ll get stuff working. But things will change. Here are some common scenarios you’ll need to deal with:

You get a new tester in your organization

This means that you’ll need to add your tester’s UUID to your account. Visit your developer account online, go to the Devices section as you did above, and enter the new UUID(s). Next, you’ll need to revisit the Provisioning Profiles section and edit your profile(s). Add the new devices to the profile. Then either re-download the profile and double-click it to re-import it into Xcode, or just tell Xcode to grab it and re-import it for you (as described above, using the Preferences window).

You get a new developer in your organization

That developer will need to have a certificate/private-key pair installed on their system that can be used to sign your app. As mentioned above, you have two options. The new developer can either create his/her own certificate, or s/he can use an existing one.
For the former, the developer will have to access your organization’s developer account (you did give them access, right?) and visit the Certificates section to create his/her certificate. S/he will need to download the certificate and install it into his/her Keychain. The provisioning profile(s) will then need to be edited in the Provisioning Profiles section so that the new certificate is added. The developer–and ideally all of the developers in your organization–should then install the certificate into their Xcode instances, either by downloading and double-clicking the certificate, or by refreshing it from within Xcode.

What about Distribution Provisioning Profiles?

Distribution profiles are similar to development profiles, except that they don’t define the devices that are allowed to install the app. This makes sense, since presumably more than 100 pre-defined people will be using your app! Other than that, the concept is the same; your distribution profile must be tied to an app ID, and it must define the certificates that can be used to sign it.

That’s all for now

This article was intended to help shed light on the concepts associated with building iOS apps, rather than providing practical steps. But hopefully it will provide that little bit of extra insight that will help you keep control over your iOS development process.

What Happens When You Add a Table Cell to a UITableView?

Seasons change. People change. And user interfaces change, usually dynamically.

One dynamic change often seen in iOS apps is the addition or removal of a table cell to a UITableView. Often, you’ll see this when switching a UITableView from read-only to edit mode (for a dramatic example, try editing a contact in the Contacts app.) I’ve done this a few times in some of my iOS apps. I’ve found a few little tutorials out there that describe generally how to do this, but most contain some information gaps. Here, I’ll try to add some of the insight I gained.

I’ll use code examples from my most recent such app. In this app, I have a “Task Details” UITableView in which the user can view the details of a task (think: todo list item). They can also tap an Edit button to edit those details.

Tasks can contain, among other things, a detailed description. Whenever you’re editing a task, a large-ish UITableViewCell containing a UITextView is present, allowing the user to enter or modify a description. However, when in read-only mode, I wanted to remove the UITableViewCell that displays the task’s description if the task in fact has no description (otherwise, it’s wasted space).

The top part of the table view when we’re just looking at the Task. Note that the “Go Shopping” task has no description.

 

Tap the edit button, and the description cell appears. In the real app, of course, it’s animated.

One other important detail for our discussion is that tasks also have simple names. When in read-only mode, of course, I use a standard UITableViewCell (of style UITableViewCellStyleDefault) to display the task’s name. When in edit mode, however, I want to replace that default cell with a custom one containing a UITextField, in which the user can edit the task’s name.

The first thing I did was to create a method called isTaskDescriptionRow:

 

- (BOOL)isTaskDescriptionRow:(int)row {
    if (tableView.editing) {
        return row == TASK_BASICS_DESC_ROW;
    } else if (taskDetailsCellIsShowing) {
        return row == TASK_BASICS_DESC_ROW;
    } else {
        return NO;
    }
}
taskDetailsCellIsShowing is a BOOL that should be self-explanatory. It gets set during the setEditing: animated: call, as shown a little ways below.
Next, I added code like the following to tableView: cellForRowAtIndexPath:
 
if (row == TASK_BASICS_NAME_ROW) {
    if (tableView.editing) {
        // create and return a table view cell containing
        // a UITextField for editing the task's name
    } else {
        // create and return a default table view cell
        // displaying the task's name
    }
} else if ([self isTaskDescriptionRow:row]) {
    UITableViewCell *cell = [[UITableViewCell alloc]
                initWithStyle:UITableViewCellStyleDefault
                reuseIdentifier:nil];
    …
    [descTextView release];
    descTextView = [[UITextView alloc]
                initWithFrame:CGRectMake(0, 0, w, h)];
    // set descTextView to be non-editable and non-opaque
    descTextView.text = task.description;
    descTextView.userInteractionEnabled = !self.editing;
    [cell.contentView addSubview:descTextView];
    return cell;
} else…
Delegating to isTaskDescriptionRow: makes it easy to determine whether the current NSIndexPath should display the current task’s description. isTaskDescriptionRow: will return YES for the appropriate NSIndexPath if the UITableView is currently in edit mode, or if the current task has a description. If so, then I create a UITextView, configure it, and add it to the current table cell.
As I’d mentioned, when the current task has no description, I want the description row to be added when the table view is entering editing mode, and removed when reverting to read-only mode. This is done in the setEditing: animated: method:
- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
    [super setEditing:editing animated:animated];
    [tableView setEditing:editing animated:animated];

    if (editing) {
        if (![self taskHasDescription]) {
            [self addDescriptionRow];
        } else {
            [tableView reloadData];
        }
    } else {
        if (![self taskHasDescription]) {
            [self removeDescriptionRow];
        } else {
            [tableView reloadData];
        }
    }

    [self.navigationItem setHidesBackButton:editing animated:YES];
    …
}
After invoking setEditing: animated: on the superclass as well as on the UITableView itself, I check whether we’re in fact entering editing mode (i.e. editing == YES). If so, and if the current task does not have a description (i.e. ![self taskHasDescription]), I invoke my addDescriptionRow method, which I will show below. If we’re leaving editing mode, and the current task does not have a description, then I invoke removeDescriptionRow, also shown below:
-(void)addDescriptionRow {
    [tableView beginUpdates];
    taskDetailsCellIsShowing = YES;
    NSIndexPath *idxPath = 
        [NSIndexPath indexPathForRow:TASK_BASICS_DESC_ROW
                        inSection:TASK_BASICS_SECTION];
    NSArray *idxPaths = [NSArray arrayWithObject:idxPath];
    [tableView insertRowsAtIndexPaths:idxPaths
        withRowAnimation:UITableViewRowAnimationFade];
    [tableView endUpdates];
}
 
-(void)removeDescriptionRow {
    [tableView beginUpdates];
    taskDetailsCellIsShowing = NO;
    NSIndexPath *idxPath = 
        [NSIndexPath indexPathForRow:TASK_BASICS_DESC_ROW
                        inSection:TASK_BASICS_SECTION];
    NSArray *idxPaths = [NSArray arrayWithObject:idxPath];
    [tableView deleteRowsAtIndexPaths:idxPaths
        withRowAnimation:UITableViewRowAnimationFade];
    [tableView endUpdates];
}

 

The basic approach in the add and remove methods shown above is to create an array of indexPaths representing the cell(s) to be added or removed. You then pass that array to either insertRowsAtIndexPaths: withRowAnimation: or deleteRowsAtIndexPaths: withRowAnimation:. Finally, Apple’s documentation encourages insertions/deletions to be wrapped within beginUpdates and endUpdates calls, so I do that as well.
That all worked, for the most part; the description cell appeared as desired. But I noticed a bit of a wrinkle. The first table cell, the one which displays the task’s name, is supposed to transform into an editable UITextField when the Edit button is tapped. But that wasn’t happening.
You can see in the code blocks above that this should be done in tableView: cellForRowAtIndexPath:. When the indexPath represents the task name row (section == TASK_BASICS_SECTION, row == TASK_BASICS_NAME_ROW), I then simply check the value of tableView.editing. If YES, I create a UITextField and add it to the UITableViewCell; if NO, I simply return a default UITableViewCell. Therefore, all I need to do is to add a call to tableView.reloadData in the setEditing: animated: method, right?
Wrong. When I did that, I received the following error:
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (3) must be equal to the number of rows contained in that section before the update (3), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted).'
Hmm… time to look at my tableView: numberOfRowsInSection: method:
- (NSInteger)tableView:(UITableView *)tv numberOfRowsInSection:(NSInteger)section {
    if (section == TASK_BASICS_SECTION) {
        return [self numTaskDetailsBasicSectionTableRows];
    } else …
}
Which delegates to this method:
- (int)numTaskDetailsBasicSectionTableRows {
    if (tableView.isEditing) {
        return NUM_TASK_DETAILS_BASIC_TABLE_ROWS;
    } else if (taskDetailsCellIsShowing) {
        return NUM_TASK_DETAILS_BASIC_TABLE_ROWS;
    } else {
        return NUM_TASK_DETAILS_BASIC_TABLE_ROWS - 1;
    }
}

 

NUM_TASK_DETAILS_BASIC_TABLE_ROWS represents the number of rows in that top section when the description cell is being shown (i.e. 3). At the point where I call tableView.reloadData, however, it appears that the description cell hasn’t actually been added yet, at least as far as UIKit was concerned.
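The bookkeeping UIKit is enforcing here can be expressed as a simple invariant. Here's a rough, language-neutral sketch in JavaScript; the function name and signature are mine, not UIKit's actual code:

```javascript
// A rough sketch of the row-count consistency check UIKit performs after an
// insert/delete update. The function name and shape are hypothetical.
function updateIsConsistent(rowsBefore, rowsAfter, inserted, deleted) {
  // The count the data source reports after the update must equal the count
  // it reported before, adjusted by the rows the update claimed to insert
  // or delete.
  return rowsAfter === rowsBefore + inserted - deleted;
}

// My failing case: the data source reported 3 rows both before and after an
// update that claimed 1 insertion:
updateIsConsistent(3, 3, 1, 0); // false → NSInternalInconsistencyException

// What UIKit expected after a 1-row insertion:
updateIsConsistent(3, 4, 1, 0); // true
```

The point is that both sides of the equation are consulted: UIKit asks the data source for its row counts and checks them against the insertions and deletions you declared.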
This led to my primary question: what exactly is UIKit doing when it adds a table view cell? Clearly, it needs to obtain information about the cell being added. This is especially apparent in this case, since the new cell is an atypical, custom cell, both in terms of its content and its height. Yet because the task name cell wasn’t being updated, it didn’t appear to be invoking tableView: cellForRowAtIndexPath: at any point while adding the new cell. So I set a breakpoint at the top of tableView: cellForRowAtIndexPath: to test my theory.
It turns out, UIKit was calling tableView: cellForRowAtIndexPath:. But only once. Specifically, it was calling it for the indexPath I provided in the addDescriptionRow method; i.e. the indexPath of the new description cell.
Pretty clever, actually.
But in my case, it was preventing that task-name cell from being updated. Thus, I still needed to call tableView.reloadData, but I needed to do it after the table cell addition was complete. At this point, I started scouring the documentation, as well as various online discussion forums. What I was looking for was some way to register a callback to be invoked when the table cell was completely added. Unfortunately, I couldn’t find any.
I played around a bit with reordering certain calls and performing some calls asynchronously. I was able to get the functionality to work, but the animation wound up being far from smooth; quite jarring, in fact. I was also able to get the functionality to work without animation at all, but I really wanted the animation effect. I finally settled on the following:
- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
    … all the other stuff
    if (![self taskHasDescription]) {
        [self performSelector:@selector(reloadNameCell:)
                   withObject:[NSNumber numberWithInt:0]
                   afterDelay:0.33];
    }
}
 
- (void)reloadNameCell:(NSNumber *)count {
    @try {
        NSIndexPath *ip = [NSIndexPath
                indexPathForRow:TASK_BASICS_NAME_ROW
                      inSection:TASK_BASICS_SECTION];
        [tableView reloadRowsAtIndexPaths:
                [NSArray arrayWithObject:ip]
                withRowAnimation:UITableViewRowAnimationNone];
    }
    @catch (NSException *exception) {
        int i = [count intValue];
        NSLog(@"Exception (%d) in reloading name cell, %@",
                        i, [exception description]);
        if (i < 5) {
            [self performSelector:@selector(reloadNameCell:)
                       withObject:[NSNumber numberWithInt:i + 1]
                       afterDelay:0.125];
        } else {
            NSLog(@"Too many retries; aborting");
        }
    }
}
If that looks like a hack, that’s because it is. Basically, we wait for 0.33 seconds, which seems to be just enough time for the cell-adding animation to complete. We then invoke reloadNameCell:, which at its core simply reloads the UITableViewCell that corresponds with the task name. We are also prepared to catch the above-mentioned NSInternalInconsistencyException if we didn’t quite wait long enough. If we do catch that exception, we wait again for a short period of time and then try again. We track the number of retries and abort after 5 attempts; at that point, something has really gone wrong.
So there you have it. Hopefully that provides a little bit of insight as to what goes on under the hood while your UITableView’s structure is smoothly changing.

jQuery Mobile, AJAX, and the DOM

I’ve been messing around with jQuery and jQuery Mobile lately, and ran into an issue that’s worth exploring. It might be a newbie issue–I’m sure seasoned JQM developers don’t fall into this trap much–but I figure that I might be able to save new developers some frustration.

Let me first describe the app I was working on. The app basically provides online utilities that I and other engineers might find useful from time to time in day to day work. The utilities are grouped into different categories (for example, Date Conversion tools, Data Encryption tools, etc). While I started out developing these tools for desktop browsers, I figured I also ought to port them to a mobile web app as well. While the desktop tools are all displayed on one single page, however, the mobile app would present each category of tools (e.g. Date Conversion) on its own page. The “homepage” would present a simple listview, allowing the user to select a category of tools to use.

The homepage is contained in its own HTML page (named, unsurprisingly, index.html). Each tool category also has its own page; for example, the Date Conversion tools are located on dc.html. It was with this page that I ran into problems. The primary date conversion tool is one that takes milliseconds and converts them to a formatted date, and vice-versa. Given this, I’d decided to allow users to choose the date format they want to work with. To implement this, I used a multi-page template in dc.html. The first page div consisted of the tool itself; the second div consisted of a group of radio buttons from which the user could select a date format.

Stripped of irrelevant code, dc.html looks like this:

<!DOCTYPE html>
<html>
<head>
<script>
function addDfChoices() {
        // This method populates the radio buttons. Its existence becomes important later
}
$(document).ready(function() {
        …
        addDfChoices();
        …
});
</script>
</head>
<body>

<!-- Start of main page -->
<div data-role="page" id="main">

  <div data-role="content" id="main_content">
    <span class="section_content">
      <label style="text-align:baseline">Millis since 1970</label>
      <a href="#page_dateformat" style="width:100px;float:right">Format</a>

    </span>
  </div><!-- /main content -->
</div><!-- /main page -->

<div data-role="page" id="page_dateformat">
  …
  <div data-role="content">
    <div data-role="fieldcontain" id="df_option_container">
      <fieldset data-role="controlgroup" id="df_options">
      </fieldset>
    </div>
  </div><!-- /dateformat content -->
</div><!-- /dateformat page -->

</body>
</html>

I loaded dc.html into my browser (I did a lot of testing in Chrome, but also of course on mobile browsers such as mobile Safari on my iPhone). The page worked as expected. On page load, the main page div was displayed, while a small bit of JavaScript instantiated different date formats and inserted them into the df_options fieldset located within the hidden page_dateformat page div. Tapping the Format link invoked a slide animation which loaded the radio buttons. Additional JavaScript code (not shown above) registered any radio button selections and accordingly modified the date format used in the main page.
Except… sometimes things didn’t work. Specifically, sometimes tapping on the Format link took me to the homepage instead.
Playing around a bit, I learned that if I manually loaded dc.html into my browser, things worked fine. If I started on index.html and then linked to dc.html, and then tapped the Format link, that’s when things went wrong.
This is the relevant code within index.html:
<ul data-role="listview" data-inset="true" data-theme="d">
  <li><a href="./dc.html">Date Conversion</a></li>
  <li><a href="./de.html">Data Encoding</a></li>
</ul>
Nothing too special there.
Well, I decided to use Chrome’s DOM inspector to get a look at what was happening. I loaded dc.html and looked in the DOM inspector, and indeed I saw the two major page divs, main and page_dateformat. Then I loaded index.html and tapped on the Date Conversion link, and then took a look again at the DOM inspector after the date conversion tools page loaded. There was the main page div… but where was the page_dateformat div?
Here’s what jQuery was doing. When dc.html itself was loaded, the entire contents of the HTML file were loaded into the DOM. Since I was using a multi-page template, any div declaring data-role="page" besides the first one (in my case, the page_dateformat div) was hidden… but it was still there in the DOM. The DOM looked roughly like this:
html
  head/
  body
    div data-role="page" id="main"/
    div data-role="page" id="page_dateformat" display="hidden"/
  /body
/html
Now, when I started by loading index.html and then tapping the Date Conversion link, dc.html was retrieved but only the first data-role="page" div was loaded into the DOM. Here, roughly, was the DOM at this point:
html
  head/
  body
    div data-role="page"/
            ^-- this was the content div for index.html; it's hidden at this point
    div data-role="page" id="main"/
            ^-- this is the first data-role="page" div in dc.html, loaded via AJAX then inserted into the DOM
  /body
/html
Thus, when I tapped on the Format link (which, if you recall, is marked up as <a href="#page_dateformat">Format</a>), the browser searched the DOM for a block with an ID of page_dateformat, couldn’t find one, and effectively gave up and reloaded the current page… which at this point was still index.html.
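To make that failure mode concrete, here is a hedged JavaScript sketch of the lookup-and-fallback behavior; resolveHashLink and its arguments are my own invention for illustration, not jQuery Mobile’s actual API:

```javascript
// Hypothetical sketch of jQuery Mobile's hash-link resolution, with the DOM
// represented as a simple array of page div ids.
function resolveHashLink(hash, pageIdsInDom, currentUrl) {
  var id = hash.replace(/^#/, "");
  if (pageIdsInDom.indexOf(id) !== -1) {
    // A matching internal page exists: transition to it.
    return { action: "transition", target: id };
  }
  // No matching internal page: give up and reload the current URL.
  return { action: "reload", target: currentUrl };
}

// When dc.html was loaded directly, both page divs are in the DOM:
resolveHashLink("#page_dateformat", ["main", "page_dateformat"], "dc.html");
// → { action: "transition", target: "page_dateformat" }

// When dc.html was pulled in via AJAX, only the first page div made it:
resolveHashLink("#page_dateformat", ["main"], "index.html");
// → { action: "reload", target: "index.html" }
```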
At this point I reviewed the jQuery Mobile docs. The following little blurb provides a bit of a clue:

It’s important to note if you are linking from a mobile page that was loaded via Ajax to a page that contains multiple internal pages, you need to add a rel="external" or data-ajax="false" to the link. This tells the framework to do a full page reload to clear out the Ajax hash in the URL. This is critical because Ajax pages use the hash (#) to track the Ajax history, while multiple internal pages use the hash to indicate internal pages so there will be conflicts in the hash between these two modes.

I added data-ajax="false" to the links on index.html. This solved the problem; dc.html was subsequently loaded completely into the browser’s DOM, allowing the page_dateformat div to be found. However, declaring data-ajax="false" causes links to lose their animation effects. The pages are simply loaded the old-fashioned way.
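Applied to the homepage listview shown earlier, the fix is just one attribute per link (a sketch; only the data-ajax attributes are new):

```html
<ul data-role="listview" data-inset="true" data-theme="d">
  <li><a href="./dc.html" data-ajax="false">Date Conversion</a></li>
  <li><a href="./de.html" data-ajax="false">Data Encoding</a></li>
</ul>
```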
I wasn’t too keen about that, so I removed the data-ajax="false" declarations. I then changed the approach I took to the date format selection. Instead of a new “page” with radio buttons, I simply presented the user with a popup select list. For various reasons, I kept the addDfChoices() JavaScript function depicted above; the method was now invoked on page load to populate the select with date format options.
And immediately, I ran into the same pattern of failure: when I started on dc.html, things worked great. When I started on index.html and linked to dc.html, the addDfChoices() function was never invoked. And of course, the reason was the same as well. Just as the page_dateformat div hadn’t been loaded into the DOM, the addDfChoices() function similarly was nowhere to be found.
At this point, I added data-ajax="false" back into the homepage’s links; for that matter, I also went back to using the multi-page template with radio buttons. There are certainly other solutions (separating out the radio buttons into their own HTML page and using some form of local storage to share state? putting JavaScript functions into data-role="page" divs instead of in the page header?) worth exploring, which I’ll do at some point. But just understanding the general issue is a good starting point.
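For what it’s worth, the local-storage idea could be sketched along these lines; the key name and default format below are assumptions of mine, not code from the app:

```javascript
// Hypothetical sketch of sharing the chosen date format between a standalone
// format-picker page and dc.html via Web Storage. The key name "dc.dateFormat"
// and the default format string are assumptions.
var DF_KEY = "dc.dateFormat";

function saveDateFormat(storage, format) {
  // On the format-picker page: persist the user's selection.
  storage.setItem(DF_KEY, format);
}

function loadDateFormat(storage) {
  // On dc.html: read the selection back, falling back to a default
  // if the user has never picked a format.
  return storage.getItem(DF_KEY) || "yyyy-MM-dd HH:mm:ss";
}

// In the browser, you'd pass window.localStorage as the storage argument.
```

Passing the storage object in, rather than referencing window.localStorage directly, also keeps the sketch testable outside a browser.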

Changing the cell type of an NSTableColumn

This one keeps biting me, although the solution is really simple. I’ve added NSTableViews to NIBs before, and by default, the table cells are all of type NSTextFieldCell. Well, in some cases I want a column to be represented by a different type of cell. Fortunately, Cocoa provides a number of different types of cells, such as NSButtonCell, NSPopUpButtonCell, NSComboBoxCell, NSImageCell, NSSliderCell, etc. Unfortunately, how to change a column’s cell type from the default NSTextFieldCell to one of the others isn’t so apparent.

I’ve had to do this a few times, and usually I poke around Interface Builder to see what controls it provides; first while selecting the table column, then the table column cell, then the table itself. Nothing. And then I remember: all you need to do is find the specific cell type in IB’s Object Library, and drag and drop it on top of the column itself (the actual column, not its representation in the Objects list). Voila!

When hiring a new employee

Since I just joined a new company as a senior software engineer, I thought I’d write a few posts about my experiences while they’re still fresh. This particular one is aimed at companies and hiring managers intent on hiring new employees like me. It’s advice, if you will, about how to help candidates and new hires feel welcome in the company.

These points may be specific to my particular hiring scenario. But I think they’re general enough (and surprisingly common enough) that they’re worth pointing out. It’s also worth noting that there was still plenty I liked about the company, enough for me to join (and to continue working there). Still, the whole process had room for improvement, and I figured hey, why not let other people learn from my experience?

Inside jokes during the interview

Try to keep them to a minimum. During the series of onsite interviews I had at this particular company, it seemed that every time my current interviewer ran across a coworker (during tours of the office, say, or when transitioning from one interviewer to another) the two would trade barbs of the “you’re lame” / “no, YOU’RE lame!” variety.

So what’s wrong with a little levity like that? Don’t get me wrong; I’d much rather work with people who have senses of humor, and who like having fun. But sitting in uncomfortable silence during these exchanges, with no real opportunity to join in (have you ever tried to join in on someone else’s inside joke, particularly when you don’t know the people sharing the joke? Don’t.) hardly instilled a sense that this was a team that was welcoming to new members.

Now that I’m a part of the company, I still cringe when I see new interviewees in the office, standing there awkwardly as their interviewer trades barbs with another employee.

In sum: candidates generally feel awkward already, and already know that they’re outsiders. You don’t need to reinforce that point.

Are your new hires people or numbers?

I joined the company as it was experiencing a large amount of growth. For the prior ten years it had remained a fairly small start-up, and only recently had started hiring aggressively. This was a fact which, it seemed, many of the old-timers loved to talk about.

For starters, every departmental meeting began with observations like “wow, there’s a lot more people here than there were last week!” and “do you guys remember when we used to have our company all-hands in that little conference room?” It’s possible that comments like that were intended to impress upon us what a great, exciting time it was to be at the company. But, at least for those of us who couldn’t remember that little conference room, it didn’t feel that way.

Instead, what we heard was that the old-timers were the ones who were the core of the company, the ones that mattered. The rest of us, we were just along for the ride on their coattails. And maybe that’s a valid point. But I know that as an employee, I’m most motivated when I believe, rightly or wrongly, that I’m an integral part of the company.

My new manager also decided that I was going to be the first new hire for whom he wouldn’t send out a welcome email to the rest of the department. There were getting to be just too many people, he said, for these emails to be sent out. Most employees probably don’t even read them anymore. Well, maybe so. But would the token effort of showing that you’re trying to integrate me into the rest of the company have been so tough?

In sum: help new employees feel like they’re a part of the company, rather than doing the opposite.