Thoughts from the team
Real text on a webpage is more important than ever before. A few years ago designers attempted to overcome the limitations of the web by using images to replace ‘boring’ text. Aesthetically it was great, allowing designers full control over all aspects of typography, but it made it impossible for machines to understand the content of the page. It didn’t take long to realise the obvious shortcomings, and today it seems like a distant memory. On today’s web, structured data has introduced an extra layer of compatibility between human and machine.
This week, and to much fanfare, Microsoft launched Windows 10. Three years after their much-misunderstood Windows 8 release, Microsoft are hoping they have finally found the right balance between their desktop and tablet/touchscreen interfaces. Many articles have been written about why Microsoft skipped Windows 9, and about how, in a radical departure from their traditional paid software model, they have decided to follow Apple’s lead and offer Windows 10 as a free upgrade for anyone running Windows 7 or 8 (for the next year, anyway).
GCD Technologies is a software company that creates its own software as well as writing software for clients. No matter who we write software for, it always gets tested. Unfortunately the level of testing can vary from project to project, and it correlates closely with budget. We perform most of the testing ourselves, but we also encourage our customers to become involved as much as possible.
On a recent iOS project, we noticed a stream of very polite but quite ominous error messages appearing in the application logs when the app was run on a device.
Anyone who does iOS or Mac development knows the importance of the Model-View-Controller (MVC) pattern within the Cocoa and Cocoa Touch frameworks. Put simply, the aim of MVC is to separate your data (the model) from the presentation of the data (the view). This is achieved through the use of an intermediate class (the controller) that is responsible for collecting your data from the model, and passing it to an appropriate view.
In theory this is a great system – your model data is completely self-contained and your view does not need intimate knowledge of the model in order to display it because the controller handles the fetching of the data. Likewise your data does not need to be concerned with how it will be presented – the controller is responsible for taking model data and inserting it into the right places in the view, or views. In practice this separation of concerns has a tendency to break down as the lines between model, view and controller get blurred.
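As a minimal, hypothetical sketch of that division of labour (the class and property names here are invented for illustration, and a plain object stands in for a real Cocoa view):

```objc
#import <Foundation/Foundation.h>

// Model: self-contained data, knows nothing about presentation.
@interface User : NSObject
@property (nonatomic, copy) NSString *name;
@end
@implementation User
@end

// Stand-in "view": something that can display a string.
@interface NameLabel : NSObject
@property (nonatomic, copy) NSString *text;
@end
@implementation NameLabel
@end

// Controller: fetches from the model and hands the result to the view.
@interface UserViewController : NSObject
- (void)configureLabel:(NameLabel *)label withUser:(User *)user;
@end
@implementation UserViewController
- (void)configureLabel:(NameLabel *)label withUser:(User *)user {
    // The view never touches the model directly, and the model
    // never knows how (or whether) it is being displayed.
    label.text = user.name;
}
@end
```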
Removing Model Logic from Controllers
While there are a number of places that the theory goes awry, one of the most common is in the link between the model and the controller. The relationship usually starts out fine – the model is simple and the controller can make relatively simple calls to get the model data it needs. With time and increasing model complexity the controller will find itself having to make more carefully considered calls to retrieve specific data objects, and possibly perform filtering on those objects before handing them off to the view.
The action of filtering the model data is what we’re focussed on here, as we very often use UITableViews in iOS to present a list of items before tapping to view the details of a specific item. In terms of MVC, the controller (often a UITableViewController) can ask the model for a list of the items, which it can then provide to the view (the UITableView).
To narrow the focus of the list to items that match specific criteria the controller can either ask that the model only provide a filtered list or it can filter a complete list that is handed to it by the model. We are going to focus on the latter approach in this post, but both approaches are valid and which is used depends on factors that are beyond the scope of this post.
An efficient way of filtering data is to use the -filteredArrayUsingPredicate: collection method. However, allowing the controller to perform its own filtering of the list can introduce a strong coupling between the controller and the model, because creating a predicate requires intimate knowledge of the model object’s properties and how they are interpreted.
Say, for example, your application contained an array of Users that your controller needed to filter in order to produce a list of active users. Right now an active user is defined as one that has logged into your service in the last 2 weeks. To filter the array of users, a controller would need to create a predicate such as the following:
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"lastLogin > %@", dateTwoWeeksAgo];
NSArray *activeUsers = [users filteredArrayUsingPredicate:predicate];
The drawback here should be immediately obvious – if the criteria for an active User change then any controllers that rely on these criteria will need to be updated.
A more manageable alternative is to encapsulate the process for determining an active user in the User model itself. The predicate can be constructed internally and returned to the controller as follows:
NSPredicate *activeUserPredicate = [User activeUserPredicate];
NSArray *activeUsers = [users filteredArrayUsingPredicate:activeUserPredicate];
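As a rough sketch of what that class method might look like on the model (the lastLogin property comes from the example above; computing the two-week cutoff inside the method is an assumption about the implementation):

```objc
#import <Foundation/Foundation.h>

@interface User : NSObject
@property (nonatomic, strong) NSDate *lastLogin;
+ (NSPredicate *)activeUserPredicate;
@end

@implementation User

+ (NSPredicate *)activeUserPredicate {
    // The definition of "active" now lives in exactly one place: the model.
    NSDate *dateTwoWeeksAgo = [NSDate dateWithTimeIntervalSinceNow:-14 * 24 * 60 * 60];
    return [NSPredicate predicateWithFormat:@"lastLogin > %@", dateTwoWeeksAgo];
}

@end
```

If the definition of an active user changes, only this method needs updating; every controller that filters with it picks up the new criteria for free.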
We’ve established that keeping the predicate logic in the model class is a good way to reduce the dependency between the controller and the model, but what happens when the model changes and the criteria for a predicate become more complex? If you followed the previous advice your model can only return a single predicate, so one option is to adjust the construction of the predicate to accommodate the extra conditional logic. In our User example we want to state that an active user has logged in during the last 2 weeks and has made some sort of minimum contribution:
NSPredicate *complexPredicate = [NSPredicate predicateWithFormat: @"lastLogin > %@ AND postCount > %@", dateTwoWeeksAgo, minPostCount];
This can be returned to the controller for determining active users as before, but it has become less reusable. If we want to reuse the last login check or the post count check elsewhere it will require duplicating part of this predicate. Worse still, if we change the last login or post count criteria it may require updating multiple predicates.
Fortunately we can obviate this need by utilising the NSCompoundPredicate class. This handy class allows us to construct complex predicates by combining simpler predicates with logical conditions such as AND, OR and NOT. In our case we can use an AND condition as follows:
NSPredicate *lastLoginPredicate = [NSPredicate predicateWithFormat:@"lastLogin > %@", dateTwoWeeksAgo];
NSPredicate *postCountPredicate = [NSPredicate predicateWithFormat:@"postCount > %@", minPostCount];
NSPredicate *activeUsersPredicate = [NSCompoundPredicate andPredicateWithSubpredicates:@[lastLoginPredicate, postCountPredicate]];
The lastLoginPredicate and postCountPredicate can now be created as class methods and re-used throughout the model, safe in the knowledge that they can be easily maintained. And because NSCompoundPredicate is a subclass of NSPredicate, it can be safely returned for use anywhere that an NSPredicate is expected, allowing simple conditions to grow in complexity without your calling code needing to be updated to handle it.
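Pulling this together, a sketch of how the model might expose both sub-predicates and the compound predicate as class methods (the method names and the minimum post count of 10 are invented for illustration):

```objc
#import <Foundation/Foundation.h>

@interface User : NSObject
@property (nonatomic, strong) NSDate *lastLogin;
@property (nonatomic, strong) NSNumber *postCount;
@end

@implementation User

+ (NSPredicate *)recentLoginPredicate {
    NSDate *dateTwoWeeksAgo = [NSDate dateWithTimeIntervalSinceNow:-14 * 24 * 60 * 60];
    return [NSPredicate predicateWithFormat:@"lastLogin > %@", dateTwoWeeksAgo];
}

+ (NSPredicate *)minimumContributionPredicate {
    return [NSPredicate predicateWithFormat:@"postCount > %@", @10];
}

+ (NSPredicate *)activeUserPredicate {
    // NSCompoundPredicate is itself an NSPredicate, so callers never need
    // to know that the condition is built from several parts.
    return [NSCompoundPredicate andPredicateWithSubpredicates:
            @[[self recentLoginPredicate], [self minimumContributionPredicate]]];
}

@end
```

Each criterion is now maintained in one method, and the compound predicate is composed from them rather than duplicating their format strings.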
I sat down to my first interview with GCD Technologies just over a year ago. It was an interesting experience – part interview, part chat, part pummelling for information about Git…
In retrospect it was obvious – I had used Git and they were interested to know what my experience with it had been like. They were heavily invested in Subversion and had been for quite some time, yet they were mighty interested in my opinion of Git. My opinion was simple – Git is a much better fit to the way I worked and I felt it could be the same for GCD.
Whilst I had used Subversion before I was pleased to start my first day with the news that GCD was intending to trial the use of Git. Even better, the first project in the trial would be my first project with the company so I wasn’t having to re-immerse myself in Subversion for a while yet.
One immediate advantage to adopting Git was that there was no real need to get too worried about server infrastructure. Getting started was as simple as creating a new project in Xcode and ticking the box marked “Create local git repository for this project”. After a few days of local commits we needed to start collaborating so we looked at the options available to us.
As Git is a distributed version control system (DVCS) we considered hosting a repo on a local machine – connecting in via SSH to push and pull commits – but it would have been foolish to overlook the availability of providers such as GitHub, Bitbucket and Beanstalk. While the popularity of GitHub had an obvious appeal, we elected to go with Bitbucket simply because we were able to create free private repositories with multiple users for the trial – something neither GitHub nor Beanstalk provided (and still don’t).
The first few weeks were a somewhat timid affair – we each worked primarily on our master branches, pushing regularly and hoping to avoid collisions. It wasn’t long before we realised that we really weren’t taking advantage of the power that Git’s low-cost branching model offered us. We quickly jumped on the idea of working on feature branches, allowing us to start pushing our features up to Bitbucket so that we could keep up with each other’s work. The natural evolution of this was to adopt a pull request-based workflow in order to start performing code reviews.
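The feature-branch cycle we settled into can be sketched in a throwaway repository (all names and paths here are hypothetical; in practice the branch would be pushed to a remote and merged via a pull request rather than merged locally):

```shell
set -e
# Create a disposable repo to demonstrate Git's low-cost branching.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"
main=$(git symbolic-ref --short HEAD)   # works whether the default is main or master

echo "base" > app.txt
git add app.txt
git commit -qm "Initial commit"

git checkout -qb feature/new-widget     # branch creation is instant and cheap
echo "widget" >> app.txt
git commit -qam "Add widget"

git checkout -q "$main"                 # back to the mainline
git merge -q --no-edit feature/new-widget   # merge once the review is done
git branch -qd feature/new-widget       # safe to delete: it has been merged
```

Because branches are just cheap pointers, creating one per feature costs essentially nothing, which is what made the pull request workflow practical for us.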
Pull requests are an extremely compelling feature of services like Bitbucket and GitHub. While code reviewing is a common practice across many version control systems, the combination of integrated commenting systems and readily merged branches has really helped propel the popularity of Git.
The efficacy of the pull request system can also make or break a service. After a few months of working with Bitbucket we hit a limitation – it wasn’t possible to comment on individual lines of code in a pull request. Combined with a few service outages, this caused us to reconsider our decision to go with Bitbucket.
By this point the trial was deemed to have been successful. Our small team that had started out using Git could regularly be found extolling its virtues to anyone they could corner. Despite the danger of becoming VCS bores we successfully conveyed the key benefits to the rest of the company and it wasn’t long before we had an official GitHub account and were putting our money where our mouths were.
All current projects were migrated over to GitHub and the process of creating feature branches destined to be code reviewed and merged back in by pull request became a standard part of our workflow. We’ve seen immediate benefits from this: our coding is less of a solitary affair and more of a social experience.
People are sharing tips, tricks and advice and it’s fair to say that code quality is improving as a direct result. Bugs are being caught much quicker because we are able to check out code and run it locally for testing. While this could have been done with Subversion, the process of creating a branch and getting it merged back in was seen as a thing to fear. Branch creation and merging in Git has much less stigma attached, and suddenly branching has become the norm instead of the exception.
In some circles it’s considered that unless you’re using Git on the command line you’re simply not doing it right. While it is true that the command line is where Git was born, and probably the only place where you can utilise all of its features, we found a variety of GUI clients which make a lot of day-to-day tasks quicker and easier. SmartGit became a quick favourite for Windows users, while those of us running OS X were quick to jump on the excellent Tower client. It has a strange limitation of not being able to display the contents of more than one repository at a time, but the ability to drag and drop branches to perform actions like branching, merging, publishing and pushing makes it a refreshing change from having to remember all the command-line syntax.
Using a service like GitHub means that we no longer need to invest as much time and infrastructure in maintaining a local Subversion repository, which saves development time because the burden of server maintenance is removed. The fact that each project is a separate repository also means that we no longer have a single point of failure – if our Subversion repository was corrupted, everything was halted until we got the backups sorted. Even if a service like GitHub goes down for a while, the distributed nature of Git means that we’ve got at least one full copy of any repo in the office that can be shared out if necessary.
It’s fair to say that we’re now firmly a Git shop. We still have our moments where the power and flexibility catches us out and submodules have taken a little bit of getting used to when moving from Subversion externals. On the whole the process of switching has been fast and successful. Our advice to anyone who is considering moving from an older centralised version control system (like Subversion, Perforce, or CVS) is simple – give Git a trial and you won’t have any regrets – we don’t!
It’s been a long time since I’ve done any iOS development in Objective C. A really long time. When I last looked at it, I wasn’t exactly fluent. Things were still a little alien to me. Although I was able to get through the work I had to do ok, some of the concepts, workflows and development patterns were very different to my day-to-day comfort zone in my web development environment. Going on and off a language as and when development requires is no way to learn it, so I’ve set myself a day to get back into the swing of things.