What’s New With Kernl – April 2016

Over the past 4 months we’ve been making a lot of progress on many different fronts with Kernl.   After 4 new features, 5 feature updates, 3 infrastructure changes, and numerous bug fixes, Kernl is better than ever.  Check out the detailed info below, and leave a comment or reach out if you have questions.

New Features
  • Purchase Code API – A long-requested feature has been the ability to add and remove purchase codes from Kernl via an API.  The API has always existed, but there wasn’t any documentation or examples of how to use it.  We now have detailed documentation for the Purchase Code API available at https://kernl.us/documentation/api.
  • WebHook Build Log – For customers using BitBucket and GitHub integration, it could be frustrating to figure out why your build failed.  To help with that, we added a WebHook Build Log on the continuous integration page.  It can be found at https://kernl.us/app/#/dashboard/continuous-integration.
  • Envato Purchase Code Validation – Another often requested feature was the ability to validate against Envato purchase codes.  You can read about how to use and enable this functionality at https://kernl.us/documentation#envato-purchase-code-validation.
  • Caching – Since the beginning of the year, Kernl’s traffic has more than doubled and isn’t showing any signs of slowing down.  To keep response times and server load down, update check results are now cached for 10 seconds.  What this means for you is that after you upload a new version or make any changes in the ‘edit’ modal, Kernl will take a maximum of 10 seconds to reflect those changes on the update check API endpoints.
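The caching behavior described above amounts to a short TTL (time-to-live) cache in front of the update-check endpoint.  Here’s a minimal PHP sketch of the idea — the class and method names are illustrative, not Kernl’s actual implementation:

```php
<?php
// Illustrative TTL cache, similar in spirit to Kernl's 10-second
// update-check cache. All names here are hypothetical, not Kernl's code.
class UpdateCheckCache {
    private $ttl;           // seconds a cached result stays fresh
    private $entries = [];  // key => ['value' => mixed, 'expires' => int]

    public function __construct($ttl = 10) {
        $this->ttl = $ttl;
    }

    // Return the cached value for $key, or compute and cache a fresh one.
    public function get($key, callable $compute) {
        $now = time();
        if (isset($this->entries[$key]) && $this->entries[$key]['expires'] > $now) {
            return $this->entries[$key]['value'];
        }
        $value = $compute();
        $this->entries[$key] = ['value' => $value, 'expires' => $now + $this->ttl];
        return $value;
    }
}

$cache = new UpdateCheckCache(10);
$calls = 0;
$check = function () use (&$calls) {
    $calls++; // stands in for the real database lookup
    return ['version' => '1.2.3'];
};

$first  = $cache->get('my-plugin', $check); // computes and caches
$second = $cache->get('my-plugin', $check); // cache hit within 10 seconds
// $calls is 1: the second check never reached the "database"
```

Within the 10-second window, every update check for the same plugin is served from memory, which is why a freshly uploaded version can take up to 10 seconds to appear.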
Feature Updates
  • PHP Update Check File Timeout – In the plugin_update_check.php and theme_update_check.php files that you include in your plugins and themes, the timeout value for fetching data from Kernl is set really high by default (10 seconds).  If you want the update to fail fast in the event that Kernl is down, you can now configure this value using the remoteGetTimeout property.  Depending on how close your clients’ servers are to Kernl and how fast Kernl responds, you could likely lower this value significantly.  You should exercise caution using this though.  The documentation has been updated here and here to reflect the change.  You will also need to publish a new build with the updated PHP files.
  • Email Notification Settings – You can now enable/disable email notifications from Kernl.  There are two types: General and Build.  General email notifications are all emails Kernl sends to you that aren’t build emails.  Build notifications are the emails you receive when a webhook event from BitBucket or GitHub triggers a build.  You can modify these settings in your profile.
  • Failed Build Email Notifications – You will now receive an email notification when your BitBucket/GitHub webhook push fails to build in an unexpected way.  For instance, if the version number in your kernl.version file doesn’t follow semantic versioning, the build will fail and send you an email notification.
  • Indeterminate Spinner for Version Uploads – Depending on the size of your deliverable and the speed of your connection, the Kernl interface didn’t give a lot of great feedback when you were uploading a file.  An indeterminate spinner now shows while your file is being uploaded.  Copy has also been updated to reflect that this action can take a little while.
  • Filterable Select on Repository Select Drop Downs – When trying to select a repository for continuous integration, it could be a real pain if you had lots of repositories.  A filterable select field is now in place that allows you to search large lists easily.
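To illustrate the remoteGetTimeout change above, here’s a minimal sketch.  KernlUpdateChecker is a stand-in for whatever class your copy of plugin_update_check.php actually defines, and the constructor arguments are placeholders — only the remoteGetTimeout property name comes from the documentation:

```php
<?php
// Stand-in for the updater class defined in plugin_update_check.php.
// The class name and constructor are placeholders; remoteGetTimeout
// is the property described in the Kernl documentation.
class KernlUpdateChecker {
    public $remoteGetTimeout = 10; // Kernl's default: 10 seconds
    public $endpoint;

    public function __construct($endpoint) {
        $this->endpoint = $endpoint;
    }
}

// In your plugin's main file, after including the update-check file:
$checker = new KernlUpdateChecker('https://kernl.us/api/v1/updates/<your-plugin-id>/');

// Fail fast if Kernl is unreachable: lower the fetch timeout to 3 seconds.
// Don't go too low if your clients' servers are far from Kernl.
$checker->remoteGetTimeout = 3;
```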
Infrastructure Changes
  • Capacity Increases – In mid-March we had about 4 minutes of downtime in the wee hours of the morning while we upgraded our server capacity.  Current capacity should hold until we double or triple our traffic levels.
  • Mandrill to SendGrid Migration – Since the beginning of Kernl we used Mandrill as our transactional email provider.  As I’m sure some of you know, Mandrill sort of screwed its customers by making their free low-volume plan cost $30 per month.  Since this isn’t really something we wanted (or needed) to pay for, we migrated to SendGrid.
  • Apache to Nginx Migration – As our traffic numbers started to rise, Apache started to fall over on us.  A migration to Nginx as our reverse proxy was high in the backlog, so instead of tweaking Apache we just did a quick migration to Nginx.  With the default configuration and no tweaking at all, load levels dropped from 1 – 1.5 to 0.3 – 0.7.  *high five nginx*
What’s Next?
  • Multi-tier Server Architecture – Kernl started out as a fun side project.  As a side project, keeping things simple as long as possible is almost always the right choice.  Now that Kernl has a growing number of paying customers, and those customers have lots of paying customers, it’s time for Kernl’s server architecture to grow as well.  Over the next month or two, we’ll be teasing apart Kernl’s current infrastructure to support horizontal scaling and automatic failover in case any node in the stack goes down.
  • Better Change Log Support – The current change log support on Kernl is… meh, at best.  A big goal for the next month or two is to get better change log support out the door.
  • Analytics – Having some insights into your clients has always been a goal of Kernl.  Doing this efficiently and cost effectively is tough, but we’re 60% there already.  Infrastructure work has a higher priority than this right now, but getting this out the door in the next few months is a priority.
  • Bug Fixes – As always, bug fixes come first.
Other News

When you log in to Kernl, near the top you see a few boxes with general stats in them.  The ‘update pings’ stat is going to be off for a while until the new analytics work is complete.  This is because the naive way we currently calculate update pings isn’t compatible with the new caching.  The ‘update downloads’ stat is still accurate since we do not cache the download endpoints.

WordPress Development as a Team

At my day job I’m really the only person that knows how to write WordPress plugins, so when I write one it’s usually sandboxed on my machine where nobody can touch it. However, in a side endeavor I’m part of, we have a team of 3 people developing one plugin. As the most experienced plugin developer on our team, I was tasked with coming up with a development style and plugin architecture that would work for us.

Development Style

Everyone will be running a local copy of WordPress and making their changes to the plugin locally. The plugin itself will be under version control using Git, and developers will push/pull changes from either a self-hosted Git server or GitHub. Database schema will be tracked in a file called schema.sql. When someone makes a change to the schema, it goes at the bottom of that file with a comment about why the schema changed. We’ll be using jQuery as our JavaScript framework of choice, and we’ll be writing all of our JavaScript in CoffeeScript (see my previous entries).
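As an example of the schema.sql convention described above (the table and changes here are hypothetical — your entries will obviously differ), each change is appended to the bottom of the file with a comment explaining why it was made:

```sql
-- schema.sql: append-only log of schema changes, newest at the bottom.
-- (Illustrative entries; table and column names are made up.)

-- 2016-04-12: Track member sign-ups for the reporting screen.
CREATE TABLE members (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    email VARCHAR(255) NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id)
);

-- 2016-04-20: Reports need to know where a member came from.
ALTER TABLE members ADD COLUMN signup_source VARCHAR(64) NULL;
```

Because entries are only ever appended, anyone pulling the latest changes can run just the statements added since their last sync to bring their local database up to date.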

Plugin Architecture

The more difficult aspect of developing this plugin as a team is the sheer size of the plugin. Realistically this could probably be split into about 6 different plugins by functionality, but we want to keep everything together in one tidy package. To illustrate the architecture, I made a quick drawing.

The first layer of the plugin is essentially a wrapper. It initializes the ORM that we are using to access the database (we are using a separate database for this plugin’s data), and includes the wrapper class. The wrapper class is where developers drop their sub-plugin include file and instantiate its main objects. For instance, for each sub-plugin there will probably be two classes instantiated in the wrapper: one for admin-related functionality, and the other for front-end display functionality. My thinking with this architecture was that we could all work on separate sub-plugins without crossing paths too frequently. This also allows us to separate the different functionality areas of the plugin in a logical manner. The other benefit to architecting the plugin like this is that it will be very easy to port to a different architecture in the future. I’m well aware that WordPress probably isn’t the best tool for the job, but it is the best tool for the team with the deadline that we have.
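A rough PHP sketch of the wrapper idea described above — all class names are made up for illustration, and in the real plugin each register() call would hook into WordPress instead of returning a string:

```php
<?php
// Hypothetical sub-plugin: one admin class, one front-end class.
class EventsAdmin {
    public function register() {
        // In the real plugin: add_menu_page(), settings screens, etc.
        return 'events-admin';
    }
}
class EventsFrontEnd {
    public function register() {
        // In the real plugin: shortcodes and front-end display hooks.
        return 'events-front';
    }
}

// The wrapper: each sub-plugin drops its include here and
// instantiates its two objects in one place.
class PluginWrapper {
    private $components = [];

    public function load() {
        $this->components[] = new EventsAdmin();
        $this->components[] = new EventsFrontEnd();
        // ...one admin + one front-end object per sub-plugin...
    }

    public function registerAll() {
        $registered = [];
        foreach ($this->components as $component) {
            $registered[] = $component->register();
        }
        return $registered;
    }
}

$wrapper = new PluginWrapper();
$wrapper->load();
$registered = $wrapper->registerAll(); // ['events-admin', 'events-front']
```

Since each sub-plugin only touches its own pair of classes, two developers can work in parallel and the wrapper is the only file where their changes meet.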

Thoughts

While thinking about WordPress plugin architecture, I cruised the source code of a lot of plugins and it seems that everyone goes about it in a different way. If you’ve ever developed a large-scale plugin with a team, how did you go about doing it? Did you run into any problems that you didn’t foresee at the beginning of the process?