I'm no expert, but... No, I'm no expert. I am part of a Facebook group though, I'm part of a lot of Facebook groups, I'm a little obsessed, but this Facebook group in particular is a group for sourdough bakers. A post went up yesterday asking about the proper way to store sourdough. Most commenters were using a Kilner style clip top jar, which was no surprise, I use one myself. What did surprise me, shocked me to my core in fact, was that almost all of my brethren were advocating the removal of the seal from their clip top jars. I really, really don't think you need to do this, and I'd like to explain why I think this is the case.
Firstly, sorry, do what you like; I'm sure that taking the seal off isn't hurting the starter. I'm not telling you what to do, it just seems a shame not to use the seal that you already own.
I've been storing starter in intact clip tops for over five years now, which is pretty mad when I think about it. But I appreciate that some of you have been working with starter for a lot longer.
The conundrum is this: fermentation creates CO2 that needs to go somewhere. But you don't want to leave your starter open to the elements, as it'll dry out or possibly pick up an unwanted yeast or other infection.
Clip top jars solve this: by creating a rubber or silicone seal, you prevent the nasties from getting in. Then, as CO2 builds up, the pressure forces the lid up, allowing gas to escape.
The arm wrestle is between the biceps of the glass and the metal clip! Who will win? It's the glass, usually; see below.
But what are the mitigating factors?
Quality of the starter. Personally, I don't think my starter has it within itself to break an ice cream wafer. I am a terrible person though, who regularly starves the poor fella. The flip side to that is that maybe you're a highly motivated baker who regularly feeds (and presumably uses) your starter. In which case, you're opening the lid a lot, otherwise known as burping.
Either you're treating it mean and it's totally lacklustre, or treating it keen and the pressure really doesn't get a chance to build up.
Quality of the jar. If it's going to break, it's going to break. In my experience the metal gives way before the glass. That was my fault though, and I promise not to put them in the dishwasher anymore. Probably don't put your starter in a jar with a hairline crack.
I also brew beer, and actually I've not been terrible recently. Fermentation of a decent beer is a lot more active than your average sourdough, so you tend to use airlocks. Airlocks are also great as a visual indicator of the potency of the fermentation. Vigorous bubbling is a good sign you've done something right. I bring this up for two reasons:
Bottle priming. After the primary fermentation has finished, many brewers reignite the fermentation to carbonate their beers. Before pouring their beer into flip top bottles, they add a little sugar. The seal of the bottle forces the CO2 into the beer, creating bubbles. You can overdo it though, and I know people who have had exploding bottles. I guess sometimes the metal does win. But again, mitigating factors. In beer, head space is an absolute no-no. You want to fill those bottles right to the top. You'd never do that with a starter; you want it to grow. I've done something potentially pretty silly, but so far (🤞), it helps prove my point. I hate waste, but in my last brew I ended up with some surplus wort (unfermented beer). I couldn't throw it away, so I bunged it in a clip top (with a smidge of yeast) and put it in the cupboard.
A day later it sounded like a kettle was boiling in the kitchen. After hunting around, I realised it was the leftover wort/beer. The pressure was forcing the gas through the seal, creating a slight whistle. I was so worried I'd created a beer bomb that I went out and bought a fermentation jar with a built-in airlock, then decided it was a great experiment, so haven't used the new jar. So far, no explosions. Crucially though, a similar amount of headroom to that of my starter.
The picture. To the left, a summer fruit wheat beer. Centre, my poor starter. I don't usually measure its ability to grow, but I'm hoping to post a picture later to show I haven't died due to a sourdough explosion. To the right, the unused fermentation jar.
I hope this is of some help, or prompts someone to demonstrate how utterly wrong I am. If your starter does explode, you should never believe what you read on the internet.
I built an app! I've built a few, but this one is especially special (to me).
Flood Aware is an app for tracking water levels in your local area. The data is sourced from the Environment Agency (EA). The app started off as a personal project after buying a house right on the water last summer: the canal to the front, and the river behind.
Insuring a house by the water, you're forced to think about flood risks. And while our new house wasn't affected by the 2012 floods, the notion of flooding became a bit of an obsession. I'm a total nerd, so discovering that the EA tracked water levels was a revelation! Conversations around the dinner table centred regularly on the previous day's water levels. We're all pretty nerdy.
Checking the local levels, though, was a bit of a clunky experience. I really just wanted an app that remembered my location and checked the local levels, with a nice graph. Life is always better with a nice graph. I started putting something together. Progress was slow; when you write code all day long, there is not often a lot of motivation to start again in the evening!
The project had stalled in the run-up to Christmas; I don't remember giving any serious thought to flooding (the odd joke, which is not overly funny now). Then Boxing Day happened, and we were very lucky in actual fact. Houses on our road are still uninhabitable now. In contrast, it was really just our garage that was affected. With a looming threat of more rain on the way, water levels were a renewed focal point.
Over the course of the next few days, I finished the initial version of Flood Aware and wasted no time submitting the app to the Apple and Android stores. The reaction has been fantastic. Seeing people use the app and discuss it on social media has been a real thrill.
The app, which is free, continues to improve. You can find your nearest water station, check water levels for the last three days, and be alerted to current flood warnings. The app is available on iOS, Android, and Google Chrome.
You can find more info, and links, here.
I love this effect I've been seeing in iPhone apps recently. Not sure what it's called, but I believe it has something to do with Xcode's Auto Layout feature. You know, the stretchy image at the top of Facebook's Instant Articles pages.
The prevailing feature of these images is how they react to momentum scrolling. As you pull the page down (putting the page in a negative scroll position), the header image stretches to accommodate the additional space. The image maintains its aspect ratio as it stretches, creating a zoom effect.
NOTE: Momentum scrolling is essentially the ability to over scroll your view.
This post is a result of me wanting to recreate this effect in JavaScript, for use in Cordova apps. Long story short, I cracked it, with a couple of special considerations.
As I'm sure you're aware, scrolling an HTML element triggers a scroll event, which fires for every change in position. Not so with momentum scrolling (or over scroll). Let's clarify that. If you're already scrolling and you go into an over scroll, you will indeed generate scroll events, indicating negative scroll positions. However, there are two key scenarios that don't generate scroll events:
Furthermore, I suspect the snap back animation hides the true position of the content (until complete), as I've not managed to track the animation using timers. I've not tried requestAnimationFrame in a loop, but I'm not optimistic.
For the sake of speed, I did my initial tests in Safari mobile, rather than a Cordova container. There are also some advantages, debugging wise, to working directly in Safari. It struck me that this experiment could have applications beyond Cordova; Safari web applications, for instance. Alas not: I was getting some weird results that weren't making much sense at first.
The unexpected results were due to the window's over scroll. Within an HTML page, you have to explicitly set which elements you'd like to over scroll, by adding -webkit-overflow-scrolling: touch; to the CSS of your scrollable element. Unfortunately for Safari Mobile, and by extension Safari web applications, the whole window over scrolls by default.
NOTE: The same happens in a Cordova container, but can be disabled using the DisallowOverscroll preference.
This effectively nullifies any attempt to over scroll an HTML element from a zero position. Attempting to over scroll an element that is at a starting position of 0px results in an over scroll of the whole window.
I feel like this is still something that can work outside of Cordova, which I will pursue at a later date.
Brace yourself. My implementation exists in an MVC structure, using Babel and Sass. The outcome relies heavily on jQuery; I imagine there would be small gains to be had by removing it.
For the HTML, I placed the header image outside the scrollable container. Feels like a cheat, but I've stuck an empty div (div.content-spacer) above the actual content; the reason for this will become clear when I discuss the CSS.
<div class="image-stretch"></div>
<div class="scroll-parent">
<div class="content-spacer"></div>
<div class="content-area">
<p>
Lorem ipsum...
</p>
</div>
</div>
The image itself is absolutely positioned behind the scrollable content. I have an empty div (div.content-spacer) above the content, to ensure the image is visible.
The space provided by the empty div is 20px shy of the size of the image. This provides a buffer for the over scroll animation, which I like. It's not necessary for the effect to work though.
.smooth-operator is a class that allows the conditional application of transition effects; basically, to track the snap back animation by using a similar transition duration.
.image-stretch {
background-image: url('../images/stretch.jpg');
background-position: 50% 50%;
background-size: cover;
height: 200px;
position: absolute;
left: 0;
top: 0;
width: 100%;
}
.smooth-operator {
transition-duration: 250ms;
transition-property: height;
}
.content-spacer {
height: 180px;
}
.content-area {
background: #fff;
padding: 6px 10px;
}
I wanted the script to react to every pixel movement, without having to deal with the same pixel twice. _scrollTop ensures this, by acting as the script's 'debounce'.
The finished script deals with two effects: the over scroll stretch, and a slight parallax rollup (as the image disappears off the screen). I want to talk about the parallax effect first because, while it wasn't the point of the experiment, I think it adds a nice bit of fluidity to the scroll. The effect comes at a price though.
As you scroll down the content (moving your finger up), the image tracks the content at a quarter of the speed of the scroll, see:
$imageStretch
.css('transform', `translateY(-${scrollTop / 4}px)`);
I think it looks great, so I've kept it. But the cost is, if you momentum scroll back to the top of the page, the content will hit the zero position before the image realises what is going on (no scroll event). So, there is a slight jump as the image realigns itself (as the result of a touchend event).
else if (scrollTop === 0 || scrollTop >= imageHeight) {
$imageStretch
.css('transform', `translateY(0px)`);
}
To try and cheat the imbalance, the script above resets the Y position of the image to 0px the moment the image is out of view, meaning that the image is already in its starting position, should the user surprise us with a momentum scroll. The slight jump can still be seen if you momentum scroll with the image only half visible. I'm just saying, you could make the overall user experience more cohesive by not tracking the content scroll at all.
The main attraction is the stretchy image zoom. As the finger pulls the view into over scroll, the image stretches to compensate. During the stretch, CSS transitions are disabled; they are then enabled during the snapback. The 250ms transition duration has worked quite well for me in tests.
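Distilled down (my framing, not how the finished script below is factored), the stretch is just the base height plus the absolute over scroll distance:

```javascript
const imageHeight = 200; // base height of the header image, as in the CSS

// During over scroll, scrollTop goes negative; the image grows to fill
// the space the content has been dragged away from
function stretchedHeight(scrollTop) {
  return scrollTop < 0 ? imageHeight + Math.abs(scrollTop) : imageHeight;
}

console.log(stretchedHeight(0));   // 200
console.log(stretchedHeight(-35)); // 235
```

Positive scroll positions leave the height alone; only the negative (over scroll) range stretches the image.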
const imageHeight = 200;
let _scrollTop;
$('.image-stretch')
.on('webkitTransitionEnd transitionend', function() {
$(this)
.removeClass('smooth-operator');
});
$('.scroll-parent')
.on('scroll', function() {
const scrollTop = $(this).scrollTop();
if (_scrollTop === scrollTop) {
return;
}
_scrollTop = scrollTop;
const $imageStretch = $('.image-stretch');
if (scrollTop <= 0) {
$imageStretch
.height(imageHeight + Math.abs(scrollTop));
if (scrollTop === 0) {
$imageStretch
.css('transform', `translateY(0px)`);
}
}
else if (scrollTop > 0 && scrollTop <= imageHeight) {
$imageStretch
.css('transform', `translateY(-${scrollTop / 4}px)`);
}
else {
$imageStretch
.css('transform', `translateY(0px)`);
}
})
.on('touchend', function() {
const scrollTop = $(this).scrollTop(),
$imageStretch = $('.image-stretch');
if (scrollTop < 0) {
$imageStretch
.addClass('smooth-operator');
$imageStretch
.height(imageHeight);
}
else if (scrollTop === 0 || scrollTop >= imageHeight) {
$imageStretch
.css('transform', `translateY(0px)`);
}
});
You can see the code here. To run it yourself, ensure you have Gulp and Cordova installed globally (npm i -g cordova gulp).
Run gulp build from the project root, and cordova build ios from the cordova directory. From there, you can run the project in Xcode. I will get around to creating a README, promise.
The test project was built using a yo generator I'm working on, called ml. Which is based on an MVC app framework I'm working on, called middle-layer.
In this post I'm going to give you a quick demo of how easy it is to incorporate ES7's proposed Async/Await functionality into your existing ES6 code. To build the code, I'm using Babel with 'es7.asyncFunctions' enabled. You can read about my Gulp setup here.
Consider the code below:
function asyncFuncA() {
return new Promise(function(r) {
setTimeout(() => { r('asyncA'); }, 2000);
});
}
function asyncFuncB() {
return new Promise(function(r) {
setTimeout(() => { r('asyncB'); }, 1000);
});
}
class AsyncController {
render(template, data) {
return new Promise(function(resolve, reject) {
// Do render stuff
resolve({ t: template, d: data });
});
}
asyncAction(route) {
return asyncFuncA()
.then(function(a) {
return asyncFuncB()
.then(b => { return [ a, b ]; });
})
.then(data => { return this.render('route', data); });
}
}
let c = new AsyncController();
c.asyncAction()
.then((obj) => { console.log(`${obj.d[0]} + ${obj.d[1]}`); });
We're looking at a Controller class with a couple of actions. The asyncAction function of AsyncController is what we're interested in. The action resolves two promises, one after the other, before calling render with the results of the two promises. In a previous article, we already removed a couple of callbacks with Promise.all.
asyncAction(route) {
return Promise.all([ asyncFuncA(), asyncFuncB() ])
.then(data => { return this.render('route', data); });
}
A lot cleaner, but we can do better. async and await are keywords that, when used together, allow you to write asynchronous code without callbacks. async creates a container, within which you can execute promises (prefixed with await) that halt the current scope until the promises have resolved. The resulting values of said promises are returned in the same way you would expect a synchronous function to behave.
What is important is that this only happens within the async container, which itself becomes a promise. In the following example, p and a are roughly equivalent.
function p() {
return Promise.resolve('hello world');
}
async function a() {
return 'hello world';
}
p().then((r) => { console.log(r); });
a().then((r) => { console.log(r); });
What I think is particularly neat is that class functions can also be decorated with async. So we can use this 'syntastical' sugar on our original example to create:
async asyncAction(route) {
return this.render(route, [ await asyncFuncA(), await asyncFuncB() ]);
}
The code above is equivalent to the asyncAction functions of the previous examples. I mean, pure wow factor. It's so awesome, I'm giddy. Deep breaths, carry on. asyncFuncA and asyncFuncB are both functions that return promises. These promises both return simple strings, after different timeout periods, during which time the asyncAction function's execution is halted. After the promises have resolved, the final value is returned to the current scope and execution continues, as if the awaited functions were synchronous.
This new functionality has taken promises to a whole new level for me. The async function by itself removes the need for repetitive Promise declarations. Combined with await, we get asynchronous code that is as easy to read as synchronous code. And no callbacks!
I can't decide whether this is a legitimate use for decorators, but I knew from the moment I saw this crazy syntax that this was what I wanted to achieve.
In a lot of my app projects, I chuck my actions into a series of classes which extend a simple class called Controller. My old code for Controller is below and, as you can see, it exposes an empty array of actions.
class Controller {
actions() {
return [];
}
constructor(app = {}) {
this.app = app;
}
}
The idea is that in the extended class, you add 'action' functions, that you then list in the overridden array. See the example NotesController below.
class NotesController extends Controller {
actions() {
return [
{ match: 'note', action: 'show' },
{ match: 'notes/create', action: 'create' },
{ match: 'notes/new', action: 'new' }
];
}
show(id) {
}
create(params, data, $form) {
}
new() {
}
doSomethingUseful() {
}
}
NotesController now advertises which routes it's set up to listen to. Any function not listed in the array is ignored and assumed to be a helper method of some kind. This has always felt a bit clunky; specifically, I didn't like:
Glad you asked. I've basically ripped off the autobind example from the Babel 5.0.0 blog post and created a new decorator called route. Check out the code for route below.
function route(route) {
return function(target, key, descriptor) {
var fn = descriptor.value;
delete descriptor.value;
delete descriptor.writable;
if (!route) {
route = key;
}
descriptor.get = function() {
var bound = fn.bind(this, route);
Object.defineProperty(this, key, {
configurable: true,
writable: true,
value: bound
});
return bound;
};
if (!target.routes) {
target.routes = [];
}
target.routes.push({ match: route, action: key });
};
}
The differences between this decorator and the autobind example are:

- route takes an optional parameter (also called route), that allows you to specify the route to be matched. Optional, in that if missed out, the decorator assumes the name of the action is also the route.
- It binds the route param to the function, as it's often useful to know the route in the function.

Let's see the new code:
// Controller Class
class Controller {
constructor(app = {}) {
this.app = app;
// In case no routes are specified
if (!this.routes) {
this.routes = [];
}
}
}
// NotesController Class
class NotesController extends Controller {
@route('note')
show(id) {
}
@route('notes/create')
create(params, data, $form) {
}
@route()
new() {
}
doSomethingUseful() {
}
}
You can see: no more actions function, no more verbose listing of the functions. I've intentionally left out the value of the new route, to demonstrate how the 'implied' routing works. If you run the code above in the Babel REPL, you should get the output below:
[
{"match":"note","action":"show"},
{"match":"notes/create","action":"create"},
{"match":"new","action":"new"}
]
With the exception of new, the array is identical to that of the first example. That be some nice-ass syntactic sugar. The future rocks. Peace out.
Update: added a section on Promise.resolve, added info on Promise.all.
There is so much I love about the functionality and syntax coming through under the banner of ES6. One such piece of functionality is the 'Promise'. Promises are not something that needs to be transpiled; as of writing, all but IE and Opera Mini have support out of the box. The stragglers can be polyfilled quite easily.
What follows, are three tips for using promises more effectively.
When I first started playing with promises, I found myself nesting code blocks more than I would have liked. Code like:
class Example {
saveData(data) {
return new Promise(function(resolve, reject) {
// Save Data
resolve(data);
});
}
getFromWeb(id) {
return new Promise(function(resolve, reject) {
// Get from web
resolve(data);
});
}
display(id) {
let self = this;
return new Promise(function(resolve, reject) {
self.getFromWeb(id)
.then(function(data) {
self.saveData(data)
.then(function(data) {
// Display somewhere
resolve();
});
});
});
}
}
new Example().display(1);
Not very readable and not making great use of screen real estate, when you can actually do:
display(id) {
let self = this;
return self.getFromWeb(id)
.then(function(data) {
return self.saveData(data);
})
.then(function(data) {
// Display somewhere
return data;
});
}
The display function is doing exactly the same, but now the functionality is chained. The second then function deals with the display logic before returning the data param, enabling the display function to be chained itself:
new Example().display(1).then((data) => { /* Work on data */ console.log('async finished'); });
I'm one of those people who has never read a VCR manual. I pick up and do, realising only years later that I didn't need to rush home every time I wanted to record something, because the VCR had a timer. I once wrote a really handy little function in SQL called VALUENULL, for dealing with NULL values. I can't believe that sort of functionality wasn't built in. Oh wait, ISNULL.
Well, I find myself in that place again. After triumphing that I'd come up with such a simple way to provide consistent Promise-returning functions with Util.emptyPromise (see below), I then worried that such a thing might be considered bad practice.
class Util {
static emptyPromise(val = null) {
return new Promise((resolve) => { resolve(val); });
}
}
You see, the point of the function is to wrap a value (or no value) in a prefab Promise that always resolves. You would do this if you were creating a non-blocking/asynchronous API on top of synchronous code. Or if you envisaged blocking code becoming asynchronous in the future and wanted to ensure that the public API didn't feel the effect of such massive breaking changes.
A prime example of this is when I recently wrote a data layer based on localstorage (which is synchronous), then decided that localstorage wasn't cutting the mustard, so I replaced it with localForage (which is Promise based). That weekend is one I won't forget in a hurry.
My point is, Util.emptyPromise is a less elegant equivalent to the already existing Promise.resolve. I'll leave this section with the original pun, because it still makes me chuckle.
The function is poorly named, because it can return a value. I just like the pun. An example of the pun in action:
class Election {
fullOf() {
return Util.emptyPromise()
.then(() => { return Util.emptyPromise(); });
}
}
You may want to check up my sleeves at this point, because I'm about to make bunnies appear out of thin air.
'Callbacks' are just something you do if you're writing non-blocking JavaScript. Callbacks, within callbacks, within callbacks. Callbacks are there so that you can control the flow of some logic which has a dependency on asynchronous code (like an Ajax request) that will take you away from the main 'blocking' execution thread.
Promises take these callbacks and make them look a lot prettier, while also providing a platform for deferring the attachment of callbacks. The following example still fires the console.log, even though the callback is attached after the Promise has already resolved.
var p = Promise.resolve();
p.then(function() { console.log('test'); });
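The same goes for attaching more than one callback: a promise settles once and remembers its value, so late arrivals get the same result. A small sketch:

```javascript
const p = Promise.resolve('test');

// First callback, attached straight away
p.then(value => console.log(`first: ${value}`));

// Second callback, attached 100ms after the promise has resolved;
// it still fires, with the same settled value
setTimeout(() => {
  p.then(value => console.log(`late: ${value}`));
}, 100);
```

Handy when several parts of an app want the result of one request without re-firing it.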
But there is still room to make our code downright gorgeous. Consider the following code:
function asyncFuncA() {
return new Promise(function(r) {
setTimeout(() => { r('asyncA'); }, 2000);
});
}
function asyncFuncB() {
return new Promise(function(r) {
setTimeout(() => { r('asyncB'); }, 1000);
});
}
class AsyncController {
render(template, data) {
return new Promise(function(resolve, reject) {
// Do render stuff
resolve({ t: template, d: data });
});
}
asyncAction(route) {
return asyncFuncA()
.then(function(a) {
return asyncFuncB()
.then(b => { return [ a, b ]; });
})
.then(data => { return this.render('route', data); });
}
}
let c = new AsyncController();
c.asyncAction()
.then((obj) => { console.log(`${obj.d[0]} + ${obj.d[1]}`); });
Looking at asyncAction: asyncFuncA and asyncFuncB are chained by calling asyncFuncB within the callback of asyncFuncA. The call to the render function starts on a separate tree, consuming the response of both asynchronous functions. A rocky sort of waterfall.
asyncAction
--> asyncFuncA
----> asyncFuncB
--> render
We can achieve the same with the function below. The second asynchronous function no longer has a dependency on the first, and we only have to call then once.
asyncAction(route) {
return Promise.all([ asyncFuncA(), asyncFuncB() ])
.then(data => { return this.render('route', data); });
}
asyncAction
--> asyncFuncA
--> asyncFuncB
--> render
Pretty hot!
What a numpty.
Why this happened is probably a good subject for another post. What I want to talk about is how I resolved the issue.
The situation was that I had two branches: develop, my intended branch, and wrong-branch, the branch I actually committed to. wrong-branch was the product of bad practice on my part; luckily it was up-to-date with develop, give or take a couple of small amends. wrong-branch had itself a number of commits, in amongst merges from develop, that I didn't want merging back into develop.
---------------- develop
\----\-----\--- wrong-branch
Ideally, I wanted to pick the very last commit on wrong-branch and append it to the end of develop. The contents of the commit were mostly in isolation from the rest of the project, so I didn't expect any conflicts.
So, what did I do? Firstly, I took two precautionary steps:
1. Merged wrong-branch with develop

I wanted to reduce the risk of conflict, so I made sure wrong-branch had the latest updates from develop.
git checkout wrong-branch
git merge develop
2. Created a temporary copy of develop, in case anything went wrong

git checkout develop
git checkout -b develop-tmp
Did I mention that this is the first time I've attempted a cherry pick? In order to perform a cherry pick, you need the hash of the commit you want to grab. The hash will look something like d736fa95b41a36f5c59074afdbc773d60ca5a99b, or the shortened version d736fa9. You can get this from git log.
git checkout develop-tmp
git cherry-pick d736fa9
The second line of the example above resulted in the following error:
... is a merge but no -m option was given.
The -m option allows for a parent number. A commit's parent is essentially the commit's predecessor; a merge commit has more than one, which is why git asks you to choose. Parent 1 is the branch the merge was made on, parent 2 is the branch that was merged in. So, to keep the changes relative to the branch the merge landed on, use '-m 1'.
Let's give it another go:
git checkout develop-tmp
git cherry-pick -m 1 d736fa9
Huzzah! It worked. If you had any conflicts at this point, now is the time to resolve and commit. Then, all that is left is to merge into the primary develop branch.
git checkout develop
git merge develop-tmp
git branch -D develop-tmp
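If you want to see the -m behaviour without risking a real project, here's the whole dance on a throwaway repo (the branch names feature and release are mine, and --no-ff forces a merge commit so the cherry pick would hit the same "no -m option" error without it):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q repo; cd repo
git config user.email demo@example.com
git config user.name demo
git checkout -q -b develop
echo a > file.txt; git add file.txt; git commit -qm 'initial'
git branch release                      # side branch from the same point
git checkout -q -b feature
echo c > other.txt; git add other.txt; git commit -qm 'feature work'
git checkout -q develop
git merge -q --no-ff --no-edit feature  # tip of develop is now a merge commit
hash=$(git rev-parse HEAD)
git checkout -q release
git cherry-pick -m 1 "$hash"            # -m 1: diff against parent 1 (develop)
ls                                      # other.txt has arrived on release
```

The merge is made on develop, so parent 1 is develop, and -m 1 picks up exactly what the merge brought in.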
The end.
Watching 'the spoon' scene from The Matrix for the first time was one of those zen moments when the universe gets a little bit clearer. I suspect, if there is a true interpretation of the scene, mine is a little off. Essentially though, I believe that the spoon simultaneously represents a goal and the inability to achieve that goal. The spoon is there and you want to move it, but you can't, because you can't perceive it as anything more malleable than a spoon. The truth is, "there is no spoon". There is only your perception of a spoon. So for the spoon to bend, it is you who must bend.
The epiphany I had was not that I can now bend spoons. I can't, at least I don't think I can. That's not to say I'm not suspicious sometimes, but as far as I know, I'm not in The Matrix. I really like steak though, so would I want to leave if I was? My epiphany was that I see the spoon every day.
I've been tempted to prattle on for the rest of this post about how I see the spoon in every facet of my life, which is true. But it was looking very much like an incoherent ramble. So I've tried to focus on how 'the spoon' affects my work as a software developer, and what I do to see past the spoon. I will attempt to drop references to 'the spoon', at the first available opportunity.
In my work, when I am working through a new project, I typically go through these stages:
I certainly hope for step 4, and I hope steps 1-3 aren't a million miles away from most other people's. But how does one see through the spoon? On some projects, by the time I start step 3, I've all but smashed my self-confidence to pieces. Step 3 can sometimes be driven by fear that you might not live up to your own expectations, and pigheadedness that you won't allow the project to fail!
So how do I get through step 2? These are the things I do that keep me going through bouts of insecurity.
So it turns out that it was Mary Schmich, not Eleanor Roosevelt, who said "Do one thing every day that scares you". I never sit still when it comes to development. The moment I feel too comfortable, I become restless. I push myself to learn something new in every project I do. I'm not particularly self-motivated when it comes to personal projects, so I've also always used work as my driver.
I went for a job interview once, soon after I'd switched from VB to VB.NET. The interviewer asked why I'd chosen VB.NET instead of C#. I explained that VB.NET seemed like the more obvious choice, because of its similarities with what I already knew. He felt that was all the more reason to dive into C#. I didn't get the job, but I did learn C#.
Visual Studio suddenly became incredible, around 2003. I used to joke that I could write a complete application without ever finishing a word: 'da, da, tab, da, da, tab'. VS was great and I loved working in it, but I was becoming increasingly scared of the outdoors. I started programming Classic ASP in Notepad, but now I wasn't sure it was even possible to compile my applications outside of VS. I was spoilt, too used to using Enterprise editions to ever be happy in the free version. "I can't use Express, it doesn't support Solutions!"
My reaction was to start messing around with Ruby, but in a text editor rather than a full-blown IDE. A very liberating experience. I've not looked back; it's probably been about 5 years since I worked with .NET in earnest.
There really isn't anything more important to the resolution of a task than feeling like you own it. Without ownership, you make yourself powerless. The spoon will never bend unless you own that mother hubbard.
The best example of ownership I can give you is a plugin. Let's say you've been asked to put an image slider into a website. You could build it from scratch, but client expectations outweigh their budget. Anyway, there really is no point reinventing the wheel; it feels like I come across a new slider plugin every day. You implement the plugin. It's really cool, doing all this neat stuff, as if by magic. The client is really happy, but it's not working so great on IE 'whatever' under some obscure scenario, so you need to change it.
It's magic though, it's doing this thing here, and this thing there. It's crazy black magic and you're a fraud for even sticking it in the site, because you didn't create the magic, you pasted it in and hoped for the best. If you're thinking like this, it's safe to say, you're not really 'owning it'.
It's not magic, it's JavaScript, probably a jQuery plugin. You could swap out that plugin, that is definitely an option. Or check out the documentation to see if there is a setting you can tweak. But you know, sometimes you've just got to crack open the code and see what's going on. The investigation may just throw up some assumption, made by the plugin, that you can cater for in the outer project. But maybe you'll find yourself a bug that you can fix or report. Whether you're logging a bug, creating a pull request, or forking the whole plugin, you're owning it.
I'm not really comfortable with these sorts of posts, they tend to sound a bit preachy. It is something I feel quite strongly about though. Ultimately in development, as in life, the only obstacle is the one you create yourself. Change your perception of the task, take control of it and push yourself to take a different approach. There is no spoon.
In this post I'm going to describe how you can make use of Babel's support for ES6 modules, and how you might consume them as NPM packages. It's pretty neato stuff and makes for very clean code. Read on!
TL;DR: Scroll down to the Star Wars reference for the actual tutorial.
NOTE: This tutorial has two Github repos, this one and this one.
Modules have existed in JS space for a while now. I've dabbled in the past, because I'm a big fan of results, I mean who wouldn't be?
I'm just going to pick up on that last point for a moment. While I dabbled, I never really embraced modules as part of a longer-term strategy. My reluctance was due to the inherent ugliness of implementation with anything but Node's `require` and `exports` syntax. The ugliness is there to make these great ideas work in the browser.
Using the fantastic jQuery as an example, stuff like:
``` js
if ( typeof module === "object" && typeof module.exports === "object" ) {
	// For CommonJS and CommonJS-like environments where a proper `window`
	// is present, execute the factory and get jQuery.
	// For environments that do not have a `window` with a `document`
	// (such as Node.js), expose a factory as module.exports.
	// This accentuates the need for the creation of a real `window`.
	// e.g. var jQuery = require("jquery")(window);
	// See ticket #14549 for more info.
	module.exports = global.document ?
		factory( global, true ) :
		function( w ) {
			if ( !w.document ) {
				throw new Error( "jQuery requires a window with a document" );
			}
			return factory( w );
		};
} else {
	factory( global );
}
```
and
``` js
// Register as a named AMD module, since jQuery can be concatenated with other
// files that may use define, but not via a proper concatenation script that
// understands anonymous AMD modules. A named AMD is safest and most robust
// way to register. Lowercase jquery is used because AMD module names are
// derived from file names, and jQuery is normally delivered in a lowercase
// file name. Do this after creating the global so that if an AMD module wants
// to call noConflict to hide this version of jQuery, it will work.
// Note that for maximum portability, libraries that are not jQuery should
// declare themselves as anonymous modules, and avoid setting a global if an
// AMD loader is present. jQuery is a special case. For more information, see
// https://github.com/jrburke/requirejs/wiki/Updating-existing-libraries#wiki-anon
if ( typeof define === "function" && define.amd ) {
	define( "jquery", [], function() {
		return jQuery;
	});
}
```
and
``` js
// Expose jQuery and $ identifiers, even in AMD
// (#7102#comment:10, https://github.com/jquery/jquery/pull/557)
// and CommonJS for browser emulators (#13566)
if ( typeof noGlobal === strundefined ) {
	window.jQuery = window.$ = jQuery;
}
```
I get why it's all there, and I appreciate the efforts teams like jQuery put into compatibility with all of these different systems. I have benefitted from those efforts on many occasions. I bet it's a pain in the backside to maintain, it's very clever, but also, U-G-L-Y.
I was drawn back into the fold as the result of a recent ES6-based project I've been working on. I was gorging on the beautiful ES6 class syntax, doing a fine job of controlling compilation through the use of subfolders.
As an example, the classes in directories 'controller' and 'model', inherit from directory 'base'. Classes in 'controller' can reference classes in 'model', but not the other way around.
``` bash
root
  base       <-- Compile first
  controller <-- Compile third
  model      <-- Compile second
```
``` js base/base_class.js
class BaseClass {
  parent() {
    console.log('something interesting');
  }
}
```
``` js controller/app_controller.js
class AppController extends BaseClass {
  action() {
    let user = new UserModel();
    console.log('I\'m an action');
  }
}
```
``` js model/user_model.js
class UserModel extends BaseClass {
  constructor() {
    super(); // a derived class must call super() before using `this`
    this.parent();
    console.log('I\'m a model');
  }
}
```
This all worked great, better than great, I was king of the world. Until I needed to create `BaseController`, that extends `BaseClass` and is extended by `AppController`.
``` js controller/base_controller.js
class BaseController extends BaseClass {
  defaultAction() {
    console.log('I\'m a default action');
  }
}
```
``` js controller/app_controller.js
class AppController extends BaseController {
  action() {
    this.defaultAction();
  }
}
```
Due to the dreaded alphabet, `AppController` compiles before `BaseController`. Arrrgh. Why world, would you treat me this way?!
``` bash
controller
  app_controller.js  <-- Attempts to compile first, but BaseController doesn't exist yet
  base_controller.js <-- Waits patiently
```
Don't tell anyone, but my initial fix was to:
``` bash
controller
  0.base_controller.js <-- Compiles first
  app_controller.js    <-- Compiles second
```
I kidded myself for a while that this was a valid design decision, until maybe my third or fourth 'zero dot' file. I needed a better way of controlling the order of compilation; it also felt like those base classes could be reused.
We're going to create two projects: the module and the consumer.
The module package will be written in ES6 JavaScript, but will need to be transpiled to ES5, for compatibility. So the ugliness is still there, just hidden. We'll use Gulp and Babel for the build.
I've created a directory called 'blog'; in here I'm writing the following in terminal:

``` bash
mkdir es6-module
cd es6-module
npm init <-- Just enter through the defaults
mkdir src
touch gulpfile.js .gitignore
```
Your project should look like:
``` bash
es6-module
  src          <-- This is where we're going to put our ES6
  .gitignore   <-- We'll need to ignore 'node_modules', when this goes to Git
  gulpfile.js  <-- Gulp build file
  package.json <-- This was created when you typed in 'npm init'
```
Make '.gitignore' look like this:
``` text .gitignore
node_modules
```
Change the `main` option in 'package.json' to read './lib/index.js'. A 'lib' directory will be created as part of the build process, which will contain our ES5 code.
``` json
{
  ...
  "main": "./lib/index.js",
  ...
}
```
`main` is the entry point to our package. In a consumer, if you were to `require('es6-module')`, you'll get the exports from the `main` file.
We need a build script in our 'gulpfile.js'.
``` js gulpfile.js
var gulp = require('gulp'),
    del = require('del'),
    babel = require('gulp-babel');

var SRC_PATH = './src',
    LIB_PATH = './lib';

gulp.task('clear', function(cb) {
  del([ LIB_PATH + '/*' ], function() {
    cb();
  });
});

gulp.task('build', [ 'clear' ], function() {
  return gulp.src([ SRC_PATH + '/**/*.js' ])
    .pipe(babel({ blacklist: [ 'useStrict' ] }))
    .pipe(gulp.dest(LIB_PATH));
});

gulp.task('default', function() {
  gulp.start('build');
});
```
The script has three dependencies:
1. Gulp - The script runner. Like [Grunt](http://gruntjs.com/), but code first.
2. [Del](https://www.npmjs.com/package/del) - A little package for deleting stuff.
3. Babel - ES6 transpiler. Reinvigorated my already deeply unnatural love of JavaScript. Hallelujah.
Install the dependencies like so:

``` bash
npm install -g gulp babel
npm install --save-dev gulp del gulp-babel
```
I think the `clear` task is self-explanatory, so let's talk about `build`. Typically in a build script, it's tempting to concatenate, but our package is going to benefit from keeping the code in separate files. By keeping the code in separate files, modular, we'll be implementing JavaScript module benefit #2: 'Only load what you need'.
The code itself is transpiled through Babel, to create the ES5 code in 'lib'. I've blacklisted 'useStrict'. I do this by default, because `"use strict"` can stop execution in iOS UIWebViews, specifically when using Cordova.
In the src directory, create the following files:
``` bash
src
  clever_class.js <-- An example module
  index.js        <-- Our main file
```
``` js src/clever_class.js
export class CleverClass {
  constructor() {
    console.log('I\'m a clever class');
  }
}
```
``` js src/index.js
export * from './clever_class';
```
I think you can already see how useful our new package is going to be.
`CleverClass` is pretty unexceptional, except for the addition of `export` before the `class` declaration. `export` tells Babel that we want to reference `CleverClass` as a module.
The code in 'index.js' is really interesting. We're literally creating an index to all the modules in our package that we want made public. `export * from` (not `import`) re-exports `CleverClass` as part of 'index.js'.
Think about the implications here. You can have twenty different classes in this directory, all extending each other in different and exciting ways. From 'index.js', you choose which of those classes make it to your public API. `CleverClass` may inherit from a class called `BaseClass`, but only `CleverClass` is accessible, even though `CleverClass` still benefits from the existence of `BaseClass`.
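To make that visibility rule concrete, here's a sketch of the idea in a single file, with a plain object standing in for the module boundary (an illustration of the pattern, not Babel's actual output):

``` js
// base_class.js -- internal, never re-exported from index.js
class BaseClass {
  parent() { return 'something interesting'; }
}

// clever_class.js -- stands in for `export class CleverClass extends BaseClass`
class CleverClass extends BaseClass {}

// index.js -- `export * from './clever_class'` re-exports CleverClass only,
// so the package's public surface looks like this:
var publicApi = { CleverClass: CleverClass };

// A consumer sees CleverClass, but BaseClass is nowhere to be found...
console.log(Object.keys(publicApi)); // [ 'CleverClass' ]

// ...yet CleverClass still benefits from BaseClass through inheritance.
console.log(new publicApi.CleverClass().parent()); // something interesting
```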
At this point, you're starting to feel like Skeletor, just before he was robbed of the powers of Grey Skull.
Okay, build the mutha:
``` bash
gulp build
```
Any errors? No, great. You should now have a 'lib' directory that mirrors the structure of 'src', just with ES5 code, instead of ES6.
NOTE: This feels a bit 'fly-by-the-seat-of-your-pants' coding. Usually I'd have a test suite in the project, to ensure that we're all rocking in the right direction. However, we're about to build a consumer for exactly that, and for the purposes of this tutorial I wanted to keep concerns clean and avoid duplication. You dig?
The purpose of this tutorial is to demonstrate how you can consume ES6 modules, contained within an NPM package. To do this, we need a separate project, from which to consume the package; this is that project.
From the blog directory:
``` bash
mkdir module-consumer
cd module-consumer
npm init <-- Just enter through the defaults
mkdir app
touch gulpfile.js .gitignore
```
Here is our '.gitignore':
``` text .gitignore
.web
node_modules
```
Here is our directory structure:
``` bash
module-consumer
  app          <-- This is where we're going to put our test app
  .gitignore   <-- We'll need to ignore 'node_modules', when this goes to Git
  gulpfile.js  <-- Gulp build file
  package.json <-- This was created when you typed in 'npm init'
```
Our test app is going to be a very simple website, so we're going to need a web server, in this case Connect. Because we're using a website as our testbed, we need a way to consume the NPM package that the browser understands; for this, we will use Browserify.
``` js gulpfile.js
var gulp = require('gulp'),
    connect = require('gulp-connect'),
    del = require('del'),
    watch = require('gulp-watch'),
    runSequence = require('run-sequence'),
    babelify = require('babelify'),
    browserify = require('browserify'),
    source = require('vinyl-source-stream');

var APP_PATH = './app',
    WEB_PATH = './.web';

gulp.task('clear', function(cb) {
  del([ WEB_PATH + '/*' ], function() {
    cb();
  });
});

gulp.task('js', function() {
  return browserify({ entries: APP_PATH + '/app.js', debug: true })
    .transform(babelify)
    .bundle()
    .pipe(source('app.js'))
    .pipe(gulp.dest(WEB_PATH));
});

gulp.task('index', function() {
  return gulp.src([ APP_PATH + '/index.html' ])
    .pipe(gulp.dest(WEB_PATH));
});

gulp.task('connect', function(cb) {
  connect.server({ root: WEB_PATH, livereload: true });
  cb();
});

gulp.task('livereload', function() {
  return gulp.src(WEB_PATH + '/**/*')
    .pipe(connect.reload());
});

gulp.task('serve', [ 'clear' ], function(cb) {
  runSequence(
    [ 'js', 'index' ],
    'connect',
    function() {
      watch([ APP_PATH + '/app.js' ], function() { gulp.start('js'); });
      watch([ APP_PATH + '/index.html' ], function() { gulp.start('index'); });
      watch([ WEB_PATH + '/**/*' ], function() { gulp.start('livereload'); });
      cb();
    }
  );
});
```
The script has these dependencies:
1. Gulp
2. Gulp Connect - Our web server.
3. Del
4. [Gulp Watch](https://www.npmjs.com/package/gulp-watch) - Kicks off Gulp tasks, when a file changes.
5. [Run Sequence](https://www.npmjs.com/package/run-sequence) - Asynchronous task management. [Read my blog](/blog/2015/03/23/in-the-name-of-gulp/).
6. [Babelify](https://github.com/babel/babelify) - Babel transformer for Browserify.
7. Browserify - Makes Node's `require` work in the browser.
8. [Vinyl Source Stream](https://www.npmjs.com/package/vinyl-source-stream) - Makes Browserify work with Gulp.
Install them:
``` bash
npm install --save-dev gulp gulp-connect del gulp-watch run-sequence babelify browserify vinyl-source-stream
```
Here's a quick rundown of the tasks in this script:
The `js` task transpiles and concatenates the contents of 'app/app.js' (not created yet), using Browserify. Browserify follows every `require`, creates a virtual tree, then bundles all the code in one file. I mean, wow, just wow.
We're not using the `require` syntax though, so we need Babelify. Babelify transforms/transpiles the ES6 syntax to ES5, for Browserify to understand. The result is outputted to our temporary web directory ('.web', which doesn't exist yet).
The `index` task moves 'app/index.html' to '.web/index.html'. You don't want to be working directly in '.web'.
The `connect` task uses Connect to start a web server, with Live Reload.
The `livereload` task reacts to file changes. Live Reload reloads your browser programmatically. It's pure magic.
The `serve` task is what we type into terminal; it's a 'stitch everything together' task. We use Run Sequence to run our two compilation tasks, `js` and `index`, before kicking off the web server task `connect`. Finally, we set off the file watchers, which react accordingly to file changes.
I'm going to start by boilerplating 'index.html' in the 'app' directory; the sole point of this file is to load 'app.js'.
``` html app/index.html
<!DOCTYPE html>
<html>
  <head></head>
  <body>
    <script src="app.js"></script>
  </body>
</html>
```
Here's 'app.js'.
``` js app/app.js
// Example 1: Namespace
import * as es6 from 'es6-module';
new es6.CleverClass();

// Example 2: Choose exports
// import { CleverClass } from 'es6-module';
// new CleverClass();

// Example 3: Target individual files
// import { CleverClass } from 'es6-module/lib/clever_class';
// new CleverClass();
```
'app.js' contains three examples of how you can access 'CleverClass' from our first project... Aww crap, hang on a minute, we've not actually referenced our 'es6-module' package!
``` bash
npm install --save-dev ../es6-module
```
NOTE: NPM allows you to install local packages; that's what's going on in the command above.
What was I saying? Right, three examples. They should all have the same result, but show the flexibility of the ES6 way of doing modules: with `import * as`, you can wrap your imports in a namespace. Very tidy.
Run the server and breathe in the sweet, sweet smell of success.
``` bash
gulp serve
```
I accept the payoff is a little underwhelming. If all is well, when you open your dev tools in a browser pointed at http://localhost:8080, you should see:
I'm a clever class
That's not the point. The point is, "I'm a clever class" was written in a module in one package, and accessed from a script in another. All the code was written in ES6, and only the files needed were accessed in the test site.
We've gained:
We. Are. Awesome.
NOTE: You can see the code over at Github.
Let's discuss the problem first. Until recently, Gulp and Cordova were two separate Node based, command line powered worlds to me, with seemingly nothing in common. In the given scenario, I'd typically have a two directory structure:
``` bash
app         <-- Source files for the project
cordova     <-- Cordova root directory
  www       <-- Cordova app directory
gulpfile.js
```
Gulp would take care of transpiling the code in the `app` directory and transferring the spoils to `cordova/www`. Cordova is then responsible for building the Cordova project and delivering the app to an emulator. Something like:
``` bash
gulp
cd ./cordova
cordova emulate
cd ../
```
Before switching to Gulp, I used to use Middleman for a lot of the transpiling tasks, where I'd maintain a number of bash scripts to create the illusion of cohesion. It didn't feel right when I switched to Gulp though. There must be some similarity between these disparate Node based, command line tools. What was I missing?
You know what I realised? That Gulp is based on Node and so is Cordova; so I can probably access Cordova directly from within my Gulp task. It's never going to be that easy, is it?
Well, it would be a disappointment if it wasn't that easy. So long story short, it almost is. To demonstrate the integration, I'm going to cook up a quick project:
``` bash
npm init
touch gulpfile.js
npm install -g cordova gulp ios-deploy
npm install --save-dev gulp cordova-lib del
cordova create ./cordova me.k3r.cordgulp CordovaGulp
```
Accept all the defaults on `npm init`, if you're not sure how to answer. All it does is create your `package.json`, and settings can be easily changed at any time.
The `-g` means install globally, and the `--save-dev` will save the packages as development dependencies within the `package.json`. Have a look, you'll see what I mean.
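For illustration, the relevant slice of the resulting 'package.json' might look something like this (package names from the install above; the version numbers are hypothetical):

``` json
{
  "name": "cordgulp",
  "devDependencies": {
    "cordova-lib": "^4.3.0",
    "del": "^1.1.1",
    "gulp": "^3.8.11"
  }
}
```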
ios-deploy is neat if you're on a Mac and want to deploy from script or command line to iOS.
The last line scaffolds a basic Cordova project.
Paste the following into your newly created `gulpfile.js`, but don't run anything yet!
``` js gulpfile.js
var gulp = require('gulp'),
    del = require('del'),
    cordova = require('cordova-lib').cordova.raw;

var APP_PATH = './app',
    CORDOVA_PATH = './cordova/www';

gulp.task('del-cordova', function(cb) {
  del([ CORDOVA_PATH + '/*' ])
    .then(function() {
      cb();
    });
});

gulp.task('compile', [ 'del-cordova' ], function() {
  return gulp.src([ APP_PATH + '/**/*' ])
    .pipe(gulp.dest(CORDOVA_PATH));
});

gulp.task('build', [ 'compile' ], function(cb) {
  process.chdir(__dirname + '/cordova');
  cordova
    .build()
    .then(function() {
      process.chdir('../');
      cb();
    });
});

gulp.task('emulate', [ 'compile' ], function(cb) {
  process.chdir(__dirname + '/cordova');
  cordova
    .run({ platforms: [ 'ios' ] })
    .then(function() {
      process.chdir('../');
      cb();
    });
});
```
If you ran anything at this point, you'd replace the default Cordova `www` directory with the stark emptiness of your nonexistent `app` directory. Remedy that with the following, which moves the contents of `cordova/www` to `app`.
``` bash
mv ./cordova/www ./app
```
You now have the almost complete example. If you type in `gulp compile`, `cordova/www` will be recreated with the contents of `app`. Nothing else is going on here at the moment, but think of the possibilities.
We haven't quite finished yet. Type in the following, to add iOS and/or Android as platforms to your new project.
``` bash
cd cordova
cordova platform add ios
cd ../
```
While you're in the `cordova` directory, you could have also run `cordova build` or `cordova emulate ios`, but that's for losers.
Within the project root, run either of these bad boys:
``` bash
gulp build
gulp emulate
```
That's right, one command to rule them all. `gulp emulate` transpiles the code, moves it to `cordova/www`, then kicks off the Cordova `build` and `emulate` commands.
"But how does this sorcery work?" I hear you cry. Cordova developers will mostly recognise Cordova's NPM package as a command line tool, but as such a package, we should also be able to require it within a Node script (or in this case, Gulp). The reference here, `cordova = require('cordova-lib').cordova.raw`, provides access to Cordova's underlying API, exposing stuff like `build` and `emulate`.
It's not all unicorns mind; the API has an auto-detection routine in place that works out the project's root directory. This only works, however, if you're within Cordova's project structure. I'm positive this can be overcome by 'cleaner' methods of API abstraction, but for the moment I've circumvented the issue by introducing two calls to `process.chdir`. `chdir` changes the working directory of the running script. The second call resets the directory, for the purposes of possible task chaining.
See here:
``` js
gulp.task('emulate', [ 'compile' ], function(cb) {
  process.chdir(__dirname + '/cordova');
  cordova
    .run({ platforms: [ 'ios' ] })
    .then(function() {
      process.chdir('../');
      cb();
    });
});
```
Note that `emulate` is an alias for Cordova's `run`; when the `run` process completes, the directory is reset.
So there you have it: in a single Gulp command you can transpile, populate, build and emulate. For me, this little nugget has sped up my workflow, and has made the build task more approachable to other developers working on the project.
UPDATE 13/04/2015: Updated example to use the latest `del` syntax.
NOTE: You can see the code over at Github.
The release of Gulp 4 is right around the corner, but you can already use it on the 4.0 branch. Here is why you should.
When Grunt gained popularity, I was excited by the premise, but underwhelmed by the execution. I feel this is more due to a deficiency on my part, rather than an actual problem with Grunt, given the team behind it. Just looking at the Getting Started page causes static to course through my brain. #brains
This feeling of inadequacy stayed with me until I found Gulp. Gulp's barrier to entry seems a lot lower than Grunt's, it's really a tool you can just run with. Now I feel great about myself, now that I've found 'my people'. Amen brothers and sisters, this is the house of Gulp.
It's not all roses in the garden of Gulp 3 though, well maybe it is, but those roses have thorns. And those thorns all bear the words 'async callbacks'. If you've ever considered Gulp tasks to be modular building blocks of larger tasks, then you've probably faced the same disappointment that I have: they're not.
Take this simple gulpfile:
``` js
var gulp = require('gulp'),
    sass = require('gulp-sass'),
    babel = require('gulp-babel'),
    del = require('del');

var DEST = './dest',
    SRC = './src';

gulp.task('clean', function(cb) {
  del(DEST, cb);
});

gulp.task('stylesheets', function() {
  return gulp.src(SRC + '/app.scss')
    .pipe(sass())
    .pipe(gulp.dest(DEST));
});

gulp.task('javascripts', function() {
  return gulp.src(SRC + '/app.js')
    .pipe(babel({ blacklist: [ 'useStrict' ] }))
    .pipe(gulp.dest(DEST));
});

gulp.task('html', function() {
  return gulp.src(SRC + '/app.html')
    .pipe(gulp.dest(DEST));
});

gulp.task('default', [ 'clean', 'stylesheets', 'javascripts', 'html' ], function() {
});
```
The script above takes the contents of `src` and sticks it in `dest`. There is a problem with the script, though, that becomes apparent when you check the output:
``` text
[22:02:10] Starting 'clean'...
[22:02:10] Starting 'stylesheets'...
[22:02:10] Starting 'javascripts'...
[22:02:10] Starting 'html'...
[22:02:10] Finished 'clean' after 23 ms
[22:02:10] Finished 'javascripts' after 48 ms
[22:02:10] Finished 'html' after 45 ms
[22:02:10] Finished 'stylesheets' after 55 ms
[22:02:10] Starting 'default'...
[22:02:10] Finished 'default' after 12 μs
```
Look at the fifth entry: it's the `clean` task finishing after 23 ms, after all the other tasks have already started. So the clean script is still deleting stuff after the other tasks have started moving their stuff across. The fix is to run `clean` as a dependency of `default`, and only start the other tasks once it's done:
``` js
gulp.task('default', [ 'clean' ], function() {
  [ 'stylesheets', 'javascripts', 'html' ].forEach(function(taskName) {
    gulp.start(taskName);
  });
});
```
With the `default` task above, the `clean` task will complete before any other task starts; no more conflict. Thing is though, looking at the output, the `default` task is the first to finish after `clean`. Because Gulp tasks are asynchronous (non-blocking), the `default` task has no reason to hang around waiting for all the tasks in the `forEach` to complete; the code is only interested in starting each task. This isn't a big deal in our example, but what if you then needed to add a third step?
``` js
gulp.task('build', [ 'clean' ], function(cb) {
  [ 'stylesheets', 'javascripts', 'html' ].forEach(function(taskName) {
    gulp.start(taskName);
  });
  cb();
});

gulp.task('deploy', [ 'build' ], function() {
  console.log('deploy!');
});

gulp.task('default', [ 'deploy' ], function() {
});
```
Check out the output:
``` text
[22:28:20] Starting 'clean'...
[22:28:20] Finished 'clean' after 8.59 ms
[22:28:20] Starting 'build'...
[22:28:20] Starting 'stylesheets'...
[22:28:20] Starting 'javascripts'...
[22:28:20] Starting 'html'...
[22:28:20] Finished 'build' after 10 ms
[22:28:20] Starting 'deploy'...
deploy!
[22:28:20] Finished 'deploy' after 59 μs
[22:28:20] Starting 'default'...
[22:28:20] Finished 'default' after 2.89 μs
[22:28:20] Finished 'html' after 42 ms
[22:28:20] Finished 'javascripts' after 45 ms
[22:28:20] Finished 'stylesheets' after 52 ms
```
The `deploy` task finishes before the `build` tasks have completed, which is obviously not ideal!
I had expected to find that the `start` function would support a callback, or even return an event emitter. That being the case, we could use something like async (a neat package for dealing with asynchronous code) to do something like:
``` js
async
  .eachSeries(
    [ 'stylesheets', 'javascripts', 'html' ],
    function(taskName, callback) {
      gulp.start(taskName, function() { callback(); });
      // or
      // gulp.start(taskName).on('end', callback);
    },
    function(err) {
      cb();
    }
  );
```
But alas, not. The `start` function is fire and forget. In the example above, crazy stuff happens in the output:
``` text
[20:34:15] Starting 'clean'...
[20:34:15] Finished 'clean' after 8.25 ms
[20:34:15] Starting 'build'...
[20:34:15] Starting 'stylesheets'...
[20:34:15] Finished 'stylesheets' after 25 ms
```
What you need, is an unassuming, wicked little plugin called run-sequence. Using 'run-sequence', you can do something like:
``` js
gulp.task('build', [ 'clean' ], function(cb) {
  runSequence(
    [ 'stylesheets', 'javascripts', 'html' ],
    cb
  );
});
```
You can see from the output that we get exactly what we want:
``` text
[20:41:34] Starting 'clean'...
[20:41:34] Finished 'clean' after 8.18 ms
[20:41:34] Starting 'build'...
[20:41:34] Starting 'stylesheets'...
[20:41:34] Starting 'javascripts'...
[20:41:34] Starting 'html'...
[20:41:34] Finished 'html' after 44 ms
[20:41:34] Finished 'stylesheets' after 54 ms
[20:41:34] Finished 'javascripts' after 49 ms
[20:41:34] Finished 'build' after 56 ms
[20:41:34] Starting 'deploy'...
deploy!
[20:41:34] Finished 'deploy' after 81 μs
[20:41:34] Starting 'default'...
[20:41:34] Finished 'default' after 3.88 μs
```
'run-sequence' is cool, but there is a better way.
Gulp 4 uses undertaker for task management. This is significant because 'undertaker' supports the chaining of series and parallel tasks. In order to make use of this functionality, you need to install the prerelease version of Gulp, which is easily done by following this guide.
You can see examples of series and parallel functionality, here, but check this out:
``` js
gulp.task('build', gulp.series('clean', 'stylesheets', 'javascripts', 'html'));

gulp.task('deploy', gulp.series('build', function(cb) {
  console.log('deploy!');
  cb();
}));

gulp.task('default', gulp.series('deploy'));
```
The difference here is that the dependencies array and callback have been replaced with chainable `series` functions. You can see from the output below that, while the `deploy` task appears to start too early, the `console.log` demonstrates that the meat and veg of the task runs when it needs to.
``` text
[21:39:29] Starting 'default'...
[21:39:29] Starting 'deploy'...
[21:39:29] Starting 'build'...
[21:39:29] Starting 'clean'...
[21:39:29] Finished 'clean' after 8.95 ms
[21:39:29] Starting 'stylesheets'...
[21:39:29] Finished 'stylesheets' after 17 ms
[21:39:29] Starting 'javascripts'...
[21:39:29] Finished 'javascripts' after 32 ms
[21:39:29] Starting 'html'...
[21:39:29] Finished 'html' after 2.98 ms
[21:39:29] Finished 'build' after 62 ms
[21:39:29] Starting '<anonymous>'...
deploy!
[21:39:29] Finished '<anonymous>' after 222 μs
[21:39:29] Finished 'deploy' after 63 ms
[21:39:29] Finished 'default' after 65 ms
```
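If you want a rough feel for what `series` and `parallel` guarantee, here's a toy model of the semantics in plain Promises (a sketch only, not undertaker's actual implementation):

``` js
// `series` runs each step after the previous one resolves;
// `parallel` starts every step and waits for the slowest.
function series() {
  var steps = Array.prototype.slice.call(arguments);
  return function() {
    return steps.reduce(function(chain, step) {
      return chain.then(step);
    }, Promise.resolve());
  };
}

function parallel() {
  var steps = Array.prototype.slice.call(arguments);
  return function() {
    return Promise.all(steps.map(function(step) { return step(); }));
  };
}

// Record the order the tasks actually start in.
var order = [];
function task(name) {
  return function() {
    order.push(name);
    return Promise.resolve();
  };
}

// clean runs alone first; js and css then start together.
var build = series(task('clean'), parallel(task('js'), task('css')));
build().then(function() {
  console.log(order); // [ 'clean', 'js', 'css' ]
});
```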
To sum up: Gulp 4 is a huge step forward in terms of task management. I've had no problems with v4 so far, but if you need to hang with v3 for a little while longer, 'run-sequence' is a good option.
DRM exposes the shame that we don't really own anything. Buy "A New Hope" on VHS, then wait for the format to become obsolete. Buy it with DRM through iTunes; has it really got any less of a shelf life? How many times have we bought the original Star Wars trilogy (to own) throughout our lives? I've bought it three times, twice on VHS, once on DVD. I will not replace the DVDs, unless the movie industry goes DRM free or Disney release an irresistible boxset. Ah, shit.
At first, I took the approach with comics that I still take with my digital movie collection: that DRM is a fact of life. If you want to see the movie, it's either DRM or a hard copy. I don't want a hard copy. As a side issue, this is why I haven't seen Twelve Monkeys in such a long time (not available to stream in the UK, and never on TV).
With comics, there is a third option that has just become a lot more popular. DRM-free comics. Look, I'm not saying they're new, just that your options have recently increased, albeit at the sacrifice of other ethical issues. What am I talking about? Comixology, a firm favourite of mine for reading digital comics, made the monumental step of allowing creators to offer their comics DRM-free. This is a truly amazing step forward in my view. So, what about the ethics? Man it's annoying that they've been taken over by (the UK tax dodging poster boys) Amazon.
I hate that I'm so invested in this company.
Let's save my ethical crises for another post. It's unlikely that Marvel or DC are going to be clicking the "DRM-free" button anytime soon. Dark Horse have their own service which is still disappointingly DRM protected. All of which means that if you want a DRM-free comic book library, you have to look at other publishers.
2000 AD have been selling their entire digital catalogue, DRM-free, for a good while now, and they deserve nothing but praise for it. Because of this, I've rekindled my love of their periodical, as well as discovering some real classic gems. Halo Jones is the one that comes immediately to mind. If I have one gripe with 2000 AD, it's its clunky checkout process. No ability to save card details or use Paypal. If the process of buying comics was easier, I'd regrettably be spending more. I know this is true because, in spite of my ethical crises, the Comixology checkout process is smooth like butter. So smooth, it's a little unsettling at times.
I don't just read DRM-free comics. I read a few titles, like Powers, Hellboy & Atomic Robo that are so far not free of their DRM shackles. Here though, I end this post with some DRM-free recommendations.
Storage in general is a bit of a tricky one in hybrid development. There are three main types of storage (excluding bespoke implementations and filesystem) you potentially have access to in a web based development:
Until recently, localStorage was my "go to guy" for app storage. It's really easy-to-use key-value storage that is, at the time of writing, the only consistent cross-platform storage mechanism. The problem with localStorage, though, is that you typically only get access to 5MB. This has always been sufficient for my needs in the past, but you can't help thinking that's a scalability problem waiting to happen. The limit speaks to the intended use for this sort of storage; if you've a lot of data, look somewhere else.
localStorage.setItem('key', 'value');
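One wrinkle the one-liner above hides (my own aside, not from the original post): localStorage only stores strings, so structured data has to round-trip through JSON, which eats into that 5MB faster than you might expect. A minimal sketch, with an in-memory stand-in so it runs outside a browser:

```javascript
// Hedged sketch: localStorage is string-only, so objects go through JSON.
// The Map-backed fallback is purely a stand-in for non-browser environments;
// in a real page you'd use localStorage directly.
const store = (typeof localStorage !== 'undefined') ? localStorage : (() => {
  const m = new Map();
  return {
    setItem: (k, v) => { m.set(k, String(v)); },
    getItem: (k) => (m.has(k) ? m.get(k) : null),
  };
})();

store.setItem('settings', JSON.stringify({ theme: 'dark', fontSize: 14 }));
const settings = JSON.parse(store.getItem('settings'));
// settings is back to being a real object with real properties
```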
WebSQL is an implementation of Sqlite, which is great, because I love Sqlite. What is not so great is that there is no support in IE or Firefox, and none seemingly on the horizon. I suspect it was due to this lack of involvement from two major vendors that the W3C ceased working on the specification in November 2010. No more to be said.
var database = openDatabase('testDB', '1.0', 'Test Database', 1024 * 1024);
database.transaction(function (transaction) {
transaction.executeSql('CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, value VARCHAR)');
transaction.executeSql('INSERT INTO entries (value) VALUES ("value")');
});
IndexedDB has gained greater platform support of late. Even so, with support having only just been implemented in iOS & Android, legacy support is an issue. The Blob support is really interesting. But actually, beyond what I've read, I don't know too much about IndexedDB; I've never really used it.
NOTE: I went hunting around for an example for IndexedDB, the best article I came across was this one. Wow, is IndexedDB long winded or what?!
So, which storage mechanism am I using? Well, I'm probably using IndexedDB in most cases. Ehh? I recently had call to convert a Cordova app to a Chrome app. The app in question was using localStorage. Trick is, Chrome apps don't support standard localStorage. They have their own version (called chrome.storage) that is very similar to localStorage, but is asynchronous in nature. I didn't really want to rewrite the whole data layer specifically to work with a Chrome app, but I found the idea of making it asynchronous appealing. Maybe it was time to break my reliance on localStorage.
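To sketch the shape of that change (this is my illustration, not code from the app in question): a synchronous localStorage read hands you the value immediately, while chrome.storage delivers it via a callback, so every call site needs reworking. Standardising on promises is one way through; here's a minimal, hypothetical version with an in-memory map standing in for the real backend:

```javascript
// Hypothetical sketch of putting a synchronous store behind an async API.
// 'backend' stands in for whichever storage mechanism is actually available.
const backend = new Map();

// Before: var value = storage.getItem('key');  -- synchronous, returns at once.
// After: the same read, delivered via a Promise.
function getItemAsync(key) {
  return Promise.resolve(backend.has(key) ? backend.get(key) : null);
}

function setItemAsync(key, value) {
  backend.set(key, value);
  return Promise.resolve(value);
}
```

Once the call sites consume promises, swapping the in-memory map for chrome.storage or IndexedDB becomes a contained change rather than a rewrite of the whole data layer.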
I found localForage, a Mozilla library that wraps localStorage, WebSQL and IndexedDB in an asynchronous, localStorage-like API. Perfect! The library basically uses whatever is available; you can even set an order of precedence and write your own adapters (I'm thinking chrome.storage.sync).
Below is a fragment of the code I've converted to use localForage. The JavaScript is written using ES6.
class InternalStorage {
  constructor(key) {
    this._key = key;
    this._storage = localStorage;
  }
  _serialize(data) {
    return JSON.stringify(data);
  }
  _deserialize(value) {
    return JSON.parse(value);
  }
  _getIndexKey() {
    return this._key + '-index';
  }
  getIndex() {
    var value = this._storage.getItem(this._getIndexKey());
    if (value) {
      return this._deserialize(value);
    }
    else {
      return [];
    }
  }
  setIndex(array=[]) {
    var obj = this._serialize(array);
    return this._storage.setItem(this._getIndexKey(), obj);
  }
}
Here is the converted code, using localForage:
class InternalStorage {
  constructor(key) {
    this._key = key;
    this._storage = localforage;
  }
  _serialize(data) {
    return data;
  }
  _deserialize(value) {
    return value;
  }
  _getIndexKey() {
    return this._key + '-index';
  }
  getIndex() {
    var self = this;
    return new Promise(function(resolve, reject) {
      self._storage.getItem(self._getIndexKey())
        .then((value) => { resolve(self._deserialize(value)); });
    });
  }
  setIndex(array=[]) {
    var obj = this._serialize(array);
    return this._storage.setItem(this._getIndexKey(), obj);
  }
}
The things to notice with the transition to localForage are: setIndex didn't need to change at all, and I've wrapped getIndex in a Promise so I can keep my _serialize and _deserialize methods in place. Well, you never know.
ES6 is a big thing for me at the moment, so the fact that localForage supports ES6 compliant promises was very appealing. The ability to write additional adapters adds future-proofing. My one gripe, which isn't an issue with localForage, is that we don't have a robust solution for relational storage in our web based development at the moment.
Take Perch's built-in example, perch/templates/pages/attributes/seo.html:
<perch:pages id="description" label="Description" type="textarea" size="xs" escape="true" count="chars" />
If you're not already familiar with how to implement Page Attributes, I urge you to check out Perch's docs. The implementation is simple, and as the built-in example suggests, very effective for SEO.
This is a somewhat contrived example, but should hopefully demonstrate the flexibility that page attributes add to Perch. Imagine a website that contains a list of projects. The home page contains a list of the titles of those projects, and a link to view more information. The list is generated using the perch_pages_navigation function.
<?php perch_pages_navigation(array( 'from-path' => '*' )); ?>
We'd like the list to include a thumbnail and a small excerpt of the project description. I've already provided a tutorial on a flexible approach for achieving this, but perhaps it's a bit overkill for the immediate needs of the client. With Page Attributes we can flesh out our index page, with an image and an excerpt, with very little effort.
Adding the following to perch/templates/pages/attributes/default.html:
<perch:pages id="image" label="Image" type="image" />
<perch:pages id="excerpt" label="Excerpt" type="textarea" />
Adds two additional fields in the Page Details section of all pages.
This new content is saved at a page level, so it can now be exposed in our index page using our existing perch_pages_navigation
implementation. By modifying 'perch/templates/navigation/item.html' to the following:
<perch:before>
<ul>
</perch:before>
<li<perch:if exists="current_page"> class="selected"</perch:if><perch:if exists="ancestor_page"> class="ancestor"</perch:if>>
<a href="<perch:pages id="pagePath" />">
<img src="<perch:pages id="image" />" alt="<perch:pages id="pageNavText" />">
<h1>
<perch:pages id="pageNavText" />
</h1>
<p>
<perch:pages id="excerpt" label="Excerpt" type="textarea" />
</p>
</a>
<perch:pages id="subitems" encode="false" />
</li>
<perch:after>
</ul>
</perch:after>
The outputted HTML of our index page would resemble:
<ul>
<li>
<a href="/industries/project-1.php">
<img src="/perch/resources/project-thumb.jpg" alt="Project 1">
<h1>
Project 1
</h1>
<p>
Project Excerpt
</p>
</a>
</li>
</ul>
You can see how quickly we can expose, and gain access to, page level content with Page Attributes. This technique may well fit the bill for your immediate requirements. Before committing to this course of action though, see my previously mentioned tutorial, and be aware of the following aspects of Page Attributes:
So there you are, Page Attributes. A pretty neat way to add a bit more oomph to your index pages.
The thing is, I love preprocessors. I love them so much that I'm currently brewing a series of posts extolling the virtues of preprocessors. Then I'm faced with a post by A List Apart (a site I have a lot of respect for), apparently prophesying the demise of these little wonders. I couldn't let the post go; what if they were right?
After all, it's up to all of us to constantly question everything we believe to be true. Right? Right.
The arguments set forth for the problems with preprocessors are:
Blaming the preprocessor for the coder's own bad habits is like blaming traffic cameras for speeding. To get the best out of a tool, you must use it correctly. I'm no shining example of this! I frequently try to nail pictures to the wall with a screwdriver; I'm forever using an unnecessary amount of nesting in my Sass.
You can't deny the 'lock-in' effect of adding a preprocessor to a project. In much the same way, you could argue that choosing to write an application in a language like Ruby locks future development to that language, over, say, the base languages that Ruby is built on, like C or Java.
Well, that's not quite right, you could write a native extension for Ruby in C, if you wanted. Actually, this is why I use the SCSS syntax of Sass over SASS, and why I favour 6to5 over CoffeeScript. In both examples you can make use of all the wonderful syntactic sugar that the preprocessors provide, or ignore all of it and write in the base syntax. Let us not forget that we're never ever actually locked in by a good preprocessor, as its sole purpose is to generate the base syntax, so you can opt out at any time. "Hey man, I don't need your syntactic goodness anymore, I'm going to carry on with the base file".
A couple of issues are raised by my last paragraph:
I love Ruby for the same reason that I don't like CoffeeScript. This is a weirdness that actually Lyza (the original post's author) will be helping me with later in this post. I have no interest in writing a line of C. I adore JavaScript and have never warmed to CoffeeScript as an alternative.
Why would you ever just use the base syntax, if you'd gone to the trouble of adding a preprocessor in the first place? I'm completely non-dismissive of Lyza's arguments about the 'lock-in' effect, or as I've always thought of it, the 'barrier to entry' or 'learning curve'. I work in a very talented dev team. We each come from different technological backgrounds and have our own preferences for tooling & technology. If I'm going to add a dependency to the project, it better not be at the expense of a co-worker's ability to jump in.
It's at the point where Lyza starts talking about post-processors that my angst starts to wane. I'd never heard of PostCSS or Myth and I'm pretty excited about both. I use Compass a lot; the moment that hooked me was realising that I didn't have to hack around with nonsense bits of CSS to add cross-browser support for inline-block. Maybe though it's time for a slightly different approach, in much the same way that I write ES6 compliant JavaScript and have a "preprocessor" called 6to5 convert it into something most browsers can work with (I realise I'm talking about ES5). Perhaps I should be writing compliant CSS3 and have one of these "post-processors" add all the "make it work in older browsers" stuff.
_Note: I've used some of those double quotes divisively, I wonder if you noticed. It seems to me that the use of "pre" and "post" refers to the point at which the code is compliant to a standard. So, 6to5 is actually a post-processor. I went to the site for clarification. 6to5 actually refers to itself as a transpiler, which is a marvellous way of avoiding a distinction that I care very little about._
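To make the JavaScript side of this concrete, here's a toy sample of the sort of ES6 I mean (my own illustration, nothing from either post); a transpiler like 6to5 rewrites each of these constructs into plain ES5 functions and vars:

```javascript
// ES6 as written: default parameters, template strings, arrow functions
// and classes. 6to5 compiles all of this down to ES5 equivalents.
const greet = (name = 'world') => `Hello, ${name}!`;

class Counter {
  constructor(start = 0) {
    this.count = start;
  }
  increment() {
    return ++this.count;
  }
}
```

And because the compiled output is just ordinary ES5, you can drop the transpiler at any point and keep the generated files: the opt-out that makes the "lock-in" argument less scary.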
In summation, Lyza's point (I believe) is that transpilers aren't there to cover up poor code, and that diversity in one's approach (especially within a team) is good. If I'm right, then I agree with her wholeheartedly in both regards. I don't agree with the negative connotations given to so-called preprocessors & post-processors to get to these points though.
For my own part, Lyza's post has identified the different tacks I have been taking in my usage of transpilers for JavaScript and CSS. It's time to harmonise where possible: start with standards-compliant code and have the transpiler make it dirty. I won't be giving up my nesting anytime soon though.
You pick up a book, you read a chapter or two, stick a bookmark in, then set it down on your bedside table. The next time you pick that book up, you go straight to the bookmark and continue reading. I haven't done that for a while, mind. I'm a hoarder by nature, without the space to truly exercise the talent. So I made the decision a few years ago to go digital with all of my printed media: books, magazines and comics. It makes my inability to throw anything away less of an issue.
The Kindle makes reading a book very easy as a digital medium. I was more sceptical about whether I'd get on with comics on a bright iPad screen, but the transition took and I'm almost exclusively digital now. Don't get me wrong, paper is best - there is nothing like the smell of an old book - but digital is a good compromise. The only real issue I have with digital (as a format) is that I am always scrabbling for something to read in the bath.
Okay, I'm slowly getting off the point. The point I am trying to make is that it took me a little while to buy into comics in a digital format, but once I was there, my appetite became insatiable. "Need more input."
My taste in comics has changed quite a lot over the years. At the peak of my print comic reading, it was almost exclusively DC/Marvel cape books. Now, I read almost (there's that word again) no superhero books; my tastes are more fringe. I'm looking for stories that surprise me in some way.
Enter web comics. Thank goodness, I thought we'd never get there.
Not a totally altruistic endeavour on the part of the creator; web comics are a great way to demonstrate a new talent or get different ideas out there. Regardless of motive, 'free to read' material is a win for the reader. It has led me personally to discover creators that I may not have found otherwise, and to purchase those creators' commercial offerings. A memorable example of this was finding Friends With Boys by Faith Erin Hicks, which led me to discovering books like The War at Ellsmere and Adventures of Superhero Girl.
With Friends With Boys, I started reading roughly halfway through the run. Having so much content already available got me hooked. I obsessively finished the rest of the story, making sure to log on to the site for new content the moment it was published. In this instance web comics really worked for me.
In other instances, web comics have also worked for me in the short strip format, like with Oglaf (NSFW). Oglaf works, because a) the name is very easy to remember and b) the stories are very short and funny (and rude).
I'm discussing virtues, when I should be discussing problems.
##Discoverability
Really the only reliable way I've had to find new web comics is by recommendation through io9. I've attempted to find other sources, but all too often the sites are outdated in style and content.
##Bookmarks
If I'm reading a purchased comic, I'll use Comixology or Comic Zeal. These apps are my "easy reach", they're my bookmark.
Some web comics are published in blogs, some in more purpose-built sites. Some are published as little more than JavaScript slideshows with no thought to browser history. Some sites read backwards (chronologically), some forwards.
Man, it's a mess if you just want to pick something up every now and then. It's enough to switch me off to most of them.
##Possible Solution
The problem with web comics, I've discovered while writing this post, is me (the reader). I expect too much, I think, from something that is a) free and b) created by someone(s) who don't want to spend their lives worrying over usability. I suspect they mainly want to write comics.
The solution, I've often wondered, might be to have an episodic web comic publishing framework. A service that allowed content creators to upload at whatever frequency they liked, which would then be collected into an entity that could be navigated and bookmarked by the reader, through an app or a website.
The thing that has stopped me from creating such a service in the past (time aside), is the creator's original motive in uploading the comic in the first place.
I don't know, maybe I'll have a stab at that system in the future. If you've made it to the bottom of this post, I'd be interested to hear how you manage your web comic reading. If you're a creator, I'd be really interested in hearing about the thought process that goes into publishing a web comic, and whether you see a need for a better delivery system.
I came to Rails from ASP.NET MVC (have I told you about my book?), as part of a three pronged transition:
The one thing I remember missing most when transitioning from ASP.NET MVC to Rails was not being able to render actions within a view. I'm not going to regurgitate Phil Haack's example here (by the way, when did he start working at GitHub? He was part of my MS dream team). Basically, what we're talking about is rendering a partial that is attached to a controller. This way the logic is as portable as the partial itself, without putting logic into the actual partial; something I used a lot in ASP.NET MVC.
##Where there's a will there's a gem
When I'd convinced myself there wasn't a direct replacement for this functionality, I went about searching for a gem. What I found, was Cells. This is about four years ago now, so I'm happy to see the gem is still so active. It is pretty much a direct replacement for the functionality I was looking for and I did use it for a few projects. Thing is though, it wasn't really clicking with the other Rails devs I was working with.
I think maybe it was an "against the grain", purist, "this isn't the Rails way" sort of reaction. But maybe they just saw what I couldn't: that there is a very easy way to accomplish my specific requirement. Either way, after the initial surge of wanting to use every gem under the sun, you gradually begin wanting to slim down your dependencies, and well, Cells didn't make the cut.
##So, to the point. Helpers
Oh my god, it's so obvious now. For years I was ruefully sticking logic directly into my partial views, thinking "Well, if I can't render actions, what else can I do?". What a doofus.
On a recent project, I was tired of the locals
syntax of a partial I was using quite a lot.
render( partial: "path/to/partial", locals: { param_one: "something" } )
Really tiresome, I know. Anyway, as the partial was being used more, the logic being stuck into said partial was also increasing exponentially.
``` erb partial.html.erb
<%
  param_three = false unless defined?( param_three )
  if param_two == "Something"
    param_one = "Something incredibly hideous"
  end
%>
<p class="<%= "yuck" if param_three == true %>">
  <%= "#{param_two} - #{param_one}" %>
</p>
```
In spite of the disgrace my partial had become, what really irked me was having to type in `locals` every time I rendered the partial. "I know, I'll put it into a helper method", I thought.
``` ruby something_helpers.rb
module SomethingHelpers
  def render_something(param_one, param_two, param_three = false)
    render( partial: "path/to/partial", locals: { param_one: param_one, param_two: param_two, param_three: param_three })
  end
end
```
And then the revelation, "Hang on a minute, I can put my logic in here as well". Hello.
``` ruby something_helpers.rb
module SomethingHelpers
  def render_something(param_one, param_two, param_three = false)
    if param_two == "Something"
      param_one = "Something incredibly hideous"
    end
    text = "#{param_two} - #{param_one}"
    render( partial: "path/to/partial", locals: { text: text, param_three: param_three })
  end
end
```
``` erb partial.html.erb
<p class="<%= "yuck" if param_three == true %>">
  <%= text %>
</p>
```
Seriously, sometimes I worry about me. I think I probably have this revelation every six months or so, then forget it. Hopefully after writing this, I won't forget again.
{% img centre /images/games/shades.png 175 175 Shades iOS Game Icon %}
##Shades
This is a new find for me and I love it. On first appearances, it's a Tetris clone. Actually though, that is not a fair assessment. The game's clever use of shades of colour, and how they merge together, makes for an addictive game that is harder than it looks! I think I've managed to get to level 9 so far.
{% img centre /images/games/monument.jpg 175 233 Monument Valley iOS Game Screenshot %}
##Monument Valley
Probably the best puzzle game I've ever played, certainly the most beautiful (of any genre). The game plays on the complexity of Escher-style architecture. I completed the game and the two expansions without too much difficulty; that shouldn't put you off though. It's a game that can be replayed over and over, and the experience is absorbing.
##Osmos
It's the longest standing (memorable) iOS game I have actually on my iPad, but it is also available for iPhone. It was also the first game I installed that actively encourages the use of headphones to enhance the experience. I love Osmos; it's an incredibly relaxing game to play. The difficulty of levels varies greatly. I've not managed to complete the game so far, but I play it more out of relaxation than out of a need to push through to the upper levels.
{% img centre /images/games/badland.jpg 248 175 Badland iOS Game Screenshot %}
##Badland
It seems to me that when everyone was getting excited about Flappy Bird, what they should have been playing is Badland. The idea of keeping the main character up by tapping isn't a new one. I remember playing games of a similar vein on the Commodore 64 (struggling to remember the name though). Badland is this sort of game, but the obstacles can be tricky and timing is key. The graphics and soundtrack really work for me in a playfully steampunk-y sort of way.
{% img centre /images/games/crossy-road.png 175 310 Crossy Road iOS Game Screenshot %}
##Crossy Road
My kids introduced me to this one. Cue yet another eye-roll at something the kids thought they had found, that our generation owned. "It's Frogger!" I exclaimed. "I played this when I was your age", with a smug curl to my lip. I received blank expressions to this reaction, as I usually do when I attempt some sort of generational dominance. The point about Crossy Road is that it is completely gorgeous to look at. I loved Frogger, and now I get to play it on my iPhone with sensational graphics.
##Mirror's Edge
I can't believe that this game is no longer available on the App Store! What the eff! This game has single-handedly taken more hours of my life than any other game. The multiplayer modes are brilliant; I love playing against my kids on them. The purpose of the game is to get from A to B, Parkour style. I can only hope that a new one is not far off.
##Honourable Mentions
Oh man, I really want to get this post wrapped up, but I can't without mentioning two (well, three) more games.
###N.O.V.A 2 & 3
I don't own a console capable of playing Halo. I'm not a big console sort of person, but I do love Halo. N.O.V.A 2 was the closest I could find to the experience on an iPhone. It's a great game that my kids and I regularly team up on. Well, we used to, until N.O.V.A 3 came out. #3 was initially quite frustrating, because the multiplayer maps are just so vast, we'd spend the whole game just trying to find each other. We're better now, and the game can be a great afternoon of destruction.
###Modern Combat 5
A sort of progression from N.O.V.A. I hope N.O.V.A isn't dropped because of the popularity of games like Modern Combat, but even so, MC5 is aces. I've completed the solo missions. I'm an absolute disaster at the multiplayer missions though.
I was a manager, a sales person, a support desk, a developer, and was switching these hats constantly throughout the day. My goal on any given day was to get to the end of the day without any major mishaps. If I managed to progress at all, in terms of significant progress on a project or a more efficient way of doing something, well, that was amazing.
The problem was concentration, I couldn't stay focused on a single task, without being distracted by someone or something craving my attention. We experimented with a few different ways of tackling this:
These feel obvious to me now, but they weren't back then, and they had a huge impact on my ability to get through work. Even combined though, these were not the silver bullets I was after. I'm too nosey for one thing; if someone else picked up the phone, my ears would prick up, brain on overdrive, wondering what they wanted and whether the requirements were being dealt with correctly. And who can blame me, it's my company after all.
I needed a way of maintaining concentration without completely removing myself from the office. My salvation came in another, now startlingly obvious, epiphany: headphones. It'd never really occurred to me in the past to try headphones, because I've typically opted for in-ear earphones, which are uncomfortable over long periods of time. I'm also pretty negative on headphones in an office environment as a rule, because I hate sound leakage. But I didn't see that I had an option, and it's made the world of difference. My headphones use noise reduction to completely immerse me in the task at hand.
An unexpected and corny side effect of the headphones has been trust. I am nosey because I believe I need to be involved in every decision, that my opinion always needs to be considered. Switching on the headphones has demonstrated to me that I'm not (always) the centre of the universe, and that many facets of the company can run without my constant interference.
I'm okay with that, because I spend a lot more time on the stuff I like doing now.
My first step was to install Rails using RailsInstaller. This is a great first step for Windows users, as you also get Ruby, Git and DevKit (which is important for building gems that contain native code). I elected to install the Ruby 2.1 version, which at the time of install was sporting Ruby 2.1.5.
##Invalid Certificate
When running bundle
, I encountered the following error:
Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3
As per the accepted answer on this StackOverflow question, I downloaded cacert.pem and placed it here, C:\RailsInstaller. You also need to tell gem
where to find the certificate, which is done by setting a environment variable called SSL_CERT_FILE
. This can be done on a temporary basis by typing the following into Command Prompt:
set SSL_CERT_FILE=C:\RailsInstaller\cacert.pem
##Sqlite Native
Running any command related to the local Sqlite db threw up:
cannot load such file -- sqlite3/sqlite3_native
According to this accepted answer, the problem is caused by the version of the sqlite3
gem not supporting Ruby 2.1.3+ on Windows. The gem needed to be updated to at least 1.3.10.
##Bcrypt
I encountered a similar problem with the bcrypt
gem. I didn't record the nature of the problem, but updating to at least 3.1.7 resolved the issue.
##TZInfo
When starting up the Rails server, I received an error relating to TZInfo::DataSourceNotFound
. According to the accepted answer on this question, Windows needs an additional gem for the tzinfo
gem to work correctly. Add this to your Gemfile
:
gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw]
##NPM Error
For bonus points, I always install Node along with my Rails installations, if only for JavaScript compilation in Sprockets. Node is best installed using the binary from the official website.
Typing npm
into Command Prompt for the first time, returned the following:
Error: ENOENT, stat 'C:\Users\[Username Here]\AppData\Roaming\npm
This issue was resolved by creating the missing npm
folder in Roaming
. Credit goes to the accepted answer of this question.
##Capistrano
On the first day of setup, Capistrano worked like a dream. The following day, after a system restart, no dice. Capistrano tasks kept dying with the following:
Error reading response length from authentication socket
I tried reinstalling certificates and ensured the SSH Agent was running, to no avail. I still don't completely understand the problem, but I think the solution has more to do with the PC's specific environment.
SourceTree was already installed (and running) on the PC, when I came to install Rails. As part of the installation, SourceTree installs Pageant, a Windows based SSH authentication tool.
Basically, Capistrano started working again the moment I had the presence of mind to start Pageant again.
NOTE: The PC has two sets of SSH keys setup, one through Pageant, the other through Msysgit. I thought I'd been using the Msysgit key, but I suspect I was using the Pageant one all along. For Capistrano at least, Git works from the command line, regardless of the status of Pageant.
I'm not aware of any dependency on Pageant by RailsInstaller. So I wonder whether I would have this dependency now, if I hadn't already had Pageant on the system. Or possibly, I'd have struggled getting Capistrano working at all, not appreciating the need for Pageant.
##Line Endings
I'm still not 100% clear what happened here. We manage a number of Git repos on Windows & Mac, and have not had this issue before. Upon committing changes to a project from the Windows machine, all the line endings were converted to CRLF. This caused problems with Rake. My initial attempts to fix the issue on a Mac resulted in me corrupting the Sqlite3 development database, so for the remainder of this fix, assume I've temporarily moved the db (along with all other binary files, i.e. images) out of the directory structure.
From the project root, on a Mac, I ran the following:
find . -type f -not -path "./.git/*" -exec perl -pi -e 's/\r\n|\n|\r/\n/g' {} \;
From Linux, you can run:
find . -type f -not -path "./.git/*" -exec dos2unix {} \;
The above replaces CRLF with LF for all files in the Git repo.
After re-adding the database, I ran rails server
to check for obvious issues; all seemed well. As per this Github article, I ran the following on the Windows machine:
git config --global core.autocrlf true
The above gets Git to manage line endings on Windows machines, keeping them in sync with Git's base line ending (LF).
I'd read on a couple of developer blogs that "around ear" design was the way to go for comfort, and that the Audio Technica ATH-M50Xs were a good choice for the money, at around £100. I'm a long time fan of Sennheiser, and came across the Momentum range under my own steam. The "around ear" Momentums are around £200 and are far and away the most gorgeous looking headphones I've ever seen.
Feeling out of my depth though, I asked around a couple of friends. The response was unanimous: if you want to block out sound, it has to be the Bose QuietComfort range. The QC25 model is the latest and costs £270! Holy floor Batman, that was not going to be an off-the-cuff decision!
What makes a pair of headphones cost almost £300? The QC25 has acoustic noise cancellation, which I think is basically a series of microphones that detect outside noise, which the headphones then counteract with a reverse wave, cancelling out the sound. It sounds cool, but I was also concerned the cancellation might work against me; I'm quite sensitive to electrical noise.
Working on the assumption that Bose aren't the only company capable of such feats, I also found the Sennheiser MM 550-X, which purport to do the same as the QC25s, but with Bluetooth and surround sound, for the same price! At this point the prices were more like Monopoly money in my head.
I'd seen these headphones online, but no way was I going to spend dollar until I'd tried them on! I set out to two city centres and a shopping centre, where I found two Bose shops and a third shop that only had Bose headphones for you to try. Not a snifter of the other headphones!
The QC25s were incredible! The sound was beautiful and crisp. In the busy shopping centre, I switched the noise reduction on and the world slipped away. Wow. The "anti sound" wasn't noticeable to me in the shopping centre.
I wanted to make sure I was making the right choice though, so I hit the internet reviews. I needed noise reduction now; I wasn't going to spend £100 or more on a pair of headphones that didn't disconnect me from the world. So the question was, how good are the MM 550-Xs? Even the most glowing review of these headphones, which cited them as the best Bluetooth headphones on the market, suggested that the surround sound was best turned off and the noise reduction was not on par with the QC25. I was really attracted to the Bluetooth, but noise reduction was becoming very important to me.
Fast forward through a lot of "first world" agonising and several attempts to find a shop that would allow me to try a decent selection of headphones. In the end I conceded and bought the QC25s.
In the silence of my home, I was initially struck by two disappointments: the noise reduction wasn't adding much, and the headphones leaked more sound than I'd hoped.
Then I told myself to get a grip. Why would you use the noise reduction in a quiet environment anyway? The headphones do leak more noise than I'd like, but I am exceptionally picky in this area.
I've been using them for about two weeks now and I'm entirely satisfied. At work I'm in blissful ignorance of the world around me. At home, Gravity Rush on the PS Vita is a completely new experience with these bad boys on.
I cannot overstate how comfy they are to wear and the case is invaluable as I am always lugging stuff around in my man bag.
I still have reservations about spending that amount of money on a pair of headphones, but the headphones themselves, I have no doubt, are worth every penny.
The typical structure would be:
```
root/
└── projects/
    ├── index.php
    ├── project-1.php
    └── project-2.php
```
The index itself would be provided by the perch_pages_navigation function. A basic example is below:
```php
<?php perch_pages_navigation(array( 'from-path' => '*' )); ?>
```
_The asterisk assigned to from-path tells perch_pages_navigation to work from the current directory ('projects'). More information on perch_pages_navigation can be found in the Perch documentation._
Would output:
```html
<ul>
  <li>
    <a href="/projects/project-1.php">Project 1</a>
  </li>
  <li>
    <a href="/projects/project-2.php">Project 2</a>
  </li>
</ul>
```
The perch_pages_navigation function uses the HTML template 'perch/templates/navigation/item.html' to generate the HTML above. This is a file that can be modified, or even replaced using the template option.
You're restricted in the content that can be displayed in this template because of how data in Perch is grouped. Within a perch_pages_navigation template, you have access to data related to the page, like the title and path. But you don't have access to content regions, defined using perch_content, as this is not information that is shared across all pages.
```
Page Content
- pagePath      <-- Can access this

Region Content (as defined in a perch_content region)
- some_content  <-- Can't access this
```
As of Perch 2.4, you can extend the amount of content saved at a page level using Page Attributes. Page Attributes can be very useful, but they can't be used to target a subset of pages (like our project pages), so they are not ideal for what we're trying to achieve; i.e. you can only add fields that will be available to all pages.
Update: This is not entirely true. You can set an 'Attribute template' per page, in 'Page Options'. Attribute templates allow you to decide which attributes are configurable at a page level. There is a drawback though; the 'Attribute template' is not saved in a Master page, so it would be down to the user to configure the 'Attribute template' on each page. Check out my post on Page Attributes here.
What is needed is a mechanism whereby the page order is retrieved from the navigation part of Perch, and the content from a region designed with our projects in mind.
```php
<ul>
<?php
$nav = perch_pages_navigation(array( // Return navigation page data as an array
    'from-path' => '*',
    'skip-template' => true
));
foreach ($nav as $page) { // Loop through & customise each item returned in the array
    PerchSystem::set_var('pageNavText', $page['pageNavText']); // Grab the page title
    PerchSystem::set_var('pagePath', $page['pagePath']); // Grab the correct link for each page
    perch_content_custom('Detail', array( // 'Detail' is the region containing the data we need
        'page' => $page['pagePath'], // The dynamic path to the page which contains the target region
        'template' => 'project_item.html' // This template reuses data from the target pages (image, excerpt)
    ));
}
?>
</ul>
```
The PHP above retrieves our list of pages using perch_pages_navigation, but this time skips the template. Setting skip-template to true bypasses the HTML rendering process and returns an array instead. The array itself is a list of all our project pages, including associated page data.
With the array in hand, we can apply it to a standard content region, allowing us to access project-specific content. Within the foreach loop, we grab what we need from the page data (in this case pagePath and pageNavText). We need the pagePath value for two reasons: to know where we're linking to, and to summon up our project-specific content.
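To make the data flow easier to picture, here's a minimal, self-contained sketch (plain PHP, no Perch required) of roughly what the skip-template array looks like and how the loop consumes it. The fake_navigation_array() and build_links() helpers are invented purely for illustration; only the pagePath and pageNavText keys are taken from the real Perch output.

```php
<?php
// Hypothetical shape of the array returned by perch_pages_navigation()
// when 'skip-template' => true. Keys beyond these two may also be present.
function fake_navigation_array() {
    return array(
        array('pagePath' => '/projects/project-1.php', 'pageNavText' => 'Project 1'),
        array('pagePath' => '/projects/project-2.php', 'pageNavText' => 'Project 2'),
    );
}

// Stand-in for the real loop: instead of calling perch_content_custom(),
// just build the link each template would render.
function build_links(array $nav) {
    $links = array();
    foreach ($nav as $page) {
        $links[] = '<a href="' . $page['pagePath'] . '">' . $page['pageNavText'] . '</a>';
    }
    return $links;
}

print_r(build_links(fake_navigation_array()));
```

Running this prints the two anchor tags, which is essentially what the template renders for each page (minus the thumbnail and excerpt).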
See the line below; it instructs perch_content_custom to go to the project page for the content to populate our template ('project_item.html').

```php
'page' => $page['pagePath'], // The dynamic path to the page which contains the target region
```
That's really powerful, but I've skipped over two things: the 'Detail' region the project pages use, and the 'project_item.html' template.
Where possible I tend to organise my page templates into as few content regions as possible, the primary region typically being called 'Detail'.
Let's assume our project title is being inferred from the page title. Below is the Detail region ('project_detail.html'), which includes a description, an image and a list of features.
```html
<div class="desc">
  <perch:content id="desc" type="textarea" label="Description" html="true" editor="ckeditor" imagewidth="640" imageheight="480" />
</div>
<div class="two-col">
  <div class="image">
    <img src="<perch:content type="image" id="image" label="Image" width="800" />" alt="<perch:content type="text" id="alt" label="Description" required="true" help="e.g. Photo of MD John Smith with his best wig on" title="true" />" />
  </div>
  <div class="feat">
    <ul>
      <perch:repeater id="features" label="Features">
        <li>
          <perch:content type="text" id="feature" label="Feature" />
        </li>
      </perch:repeater>
    </ul>
  </div>
</div>
```
Since the inclusion of Repeaters within content templates, it's become much easier to create self-contained content regions. Before Repeaters, the moment you hit an image gallery or feature list, you'd need to duck out of your primary content region and create a new repeating content region, leading to fun naming conventions like 'Detail - Top' and 'Detail - Bottom', with 'Feature List' stuck in the middle.
We'll need to add a couple of additional fields to 'project_detail.html' for us to access in the index page: a thumbnail and an excerpt.
```html
<perch:content id="thumbnail" type="image" label="Thumbnail" width="310" height="160" crop="true" required="true" help="Recommended image size: 310px wide & 160px high" suppress="true" />
<perch:content id="excerpt" type="textarea" label="Excerpt" html="false" imagewidth="640" imageheight="480" suppress="true" />
```
Both fields have suppress set to true, meaning the fields are available for input, but will not appear in the resulting HTML for the template. We want the user to be able to enter an excerpt, but we don't want the excerpt to appear on the detail page.
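Conceptually, suppress behaves like the little sketch below (a toy model in plain PHP, not Perch's actual implementation): every field is offered for input in the edit form, but the renderer drops any field flagged as suppressed.

```php
<?php
// Toy model of template fields; 'suppress' mirrors the Perch attribute.
$fields = array(
    array('id' => 'desc',      'suppress' => false),
    array('id' => 'excerpt',   'suppress' => true),
    array('id' => 'thumbnail', 'suppress' => true),
);

// The edit form lists every field...
function form_fields(array $fields) {
    return array_map(function ($f) { return $f['id']; }, $fields);
}

// ...but rendering skips the suppressed ones.
function rendered_fields(array $fields) {
    $out = array();
    foreach ($fields as $f) {
        if (empty($f['suppress'])) { $out[] = $f['id']; }
    }
    return $out;
}
```

So 'excerpt' and 'thumbnail' show up in the form but never in the detail page's HTML.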
So, what does 'project_item.html' look like? You can see it below:
```html
<article>
  <h2>
    <perch:content id="pageNavText" />
  </h2>
  <div class="project-thumb">
    <img src="<perch:content id="thumbnail" type="image" width="310" height="160" crop="true" />" alt="" class="img-responsive" />
  </div>
  <div class="project-detail">
    <div class="excerpt">
      <p>
        <perch:content id="excerpt" type="textarea" />
      </p>
    </div>
    <a href="<perch:content id="pagePath" />">
      VIEW CASE STUDY
    </a>
  </div>
</article>
```
The title is inferred from the pageNavText variable (set in the foreach loop); likewise, the URL comes from pagePath. The thumbnail and the excerpt are retrieved using standard Perch content tags, as it is, after all, content being rendered via perch_content_custom.
Your resulting index page will look something like:
```html
<article>
  <h2>
    Project 1
  </h2>
  <div class="project-thumb">
    <img src="/images/project-1.jpg" alt="" class="img-responsive" />
  </div>
  <div class="project-detail">
    <div class="excerpt">
      <p>
        Project 1 excerpt
      </p>
    </div>
    <a href="/projects/project-1.php">
      VIEW CASE STUDY
    </a>
  </div>
</article>
<article>
  <h2>
    Project 2
  </h2>
  <div class="project-thumb">
    <img src="/images/project-2.jpg" alt="" class="img-responsive" />
  </div>
  <div class="project-detail">
    <div class="excerpt">
      <p>
        Project 2 excerpt
      </p>
    </div>
    <a href="/projects/project-2.php">
      VIEW CASE STUDY
    </a>
  </div>
</article>
```
As you can see above, we now have an index page with links to the sub pages, including a brief excerpt and a thumbnail for each.
The advantages of this solution are:

- Because the page list is retrieved using perch_pages_navigation, the page will honour Perch's ordering functionality.
- Using perch_content_custom means that we can create fields specifically for use in project pages that don't bloat the page data of non-project pages.

The problem with the solution is that multiple calls to the database are required to display the list: one to retrieve the navigation list, and then an additional call for each of the pages returned.
A solution that reduced the number of database calls, while still harnessing the power of the Perch templating system, might be to write an SQL statement manually that joined the navigation SELECT to a statement retrieving the desired content data, then parse each of the results through the templating system. This might be something I investigate, should the volume of sub-pages cause a noticeable slowdown in load time.
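If I did go down that route, the shape of it might be something like the sketch below. Note that the table and column names here are entirely hypothetical (I haven't dug into Perch's schema); the point is simply one joined query in place of the N+1 calls.

```php
<?php
// Hypothetical single-query alternative to the N+1 pattern above.
// NOTE: the table and column names are invented for illustration;
// Perch's real schema will differ.
$sql = "
    SELECT p.pagePath, p.pageNavText, c.itemJSON
    FROM pages p
    JOIN content_items c ON c.pagePath = p.pagePath
    WHERE p.pagePath LIKE '/projects/%'
    ORDER BY p.pageSortPath
";
```

Each row returned could then be handed to the templating system to render 'project_item.html'.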
I think we can all safely assume, going into this post, that the word 'men' is used strictly in the non-gender-specific sense. Definitely.
Well, that was weird. Let me take you through my thought process, step by step:
With my brain stuck on point 5, my own list must be heard! So my, nay THE, list of funnest ~~men~~ humans of ALL TIME, with every chance that it may or may not be factually accurate, is:
##Richard Pryor
The Chive got this one right, holy cow this man was funny. I first encountered Pryor, as every good nerd only could have, in Superman 3.
##John Cleese
Of all the Brits not to make any American list of funny people, it's John Cleese I'm amazed was missing. He's done so much it's hard to name a favourite, but I'm going to try: Fawlty Towers. Or maybe Life of Brian.
Mr. Cleese also has a lot of interesting things to say about creativity. He has certainly helped me look at how I work creatively.
##Sarah Silverman
The Sarah Silverman Show has made me cry with laughter on more than one occasion. She does a surreal, irreverent style of comedy that... I wish there was more of it. (North American funny lady)
##Dylan Moran
I've been in love with this man's comedy since Black Books. I like to believe it's actually how Dylan Moran lives his life.
##Dawn French
Dawn French is an absolute must for this list, for so many reasons; from Comic Strip to Vicar of Dibley.
##Sean Lock
I find Sean Lock's view of the world hysterical.
##Robin Williams
He had me at Mork. I've not stopped laughing since.
##Reginald D. Hunter
I love what I perceive to be British humour, and struggle with a lot of 'established' American comedians. Reginald D. Hunter just seems to get it.
##Morecambe & Wise
Is it fair to have a double act? If this was a countdown, they'd be my number one.
##Eddie Izzard
He was the comedy soundtrack to my twenties. It's the bird flying by the plane that gets me every time.
##Tommy Cooper
"What's this?" "A dead one of these."
There is a lot of pressure writing a list like this. I've given up trying to find CC images for them. I don't know how other people do it. Billy Crystal and Bill Bailey certainly belong on the list as well, I'm not sure why I haven't added them.
Stephen Fry, who I find immensely funny, also isn't on the list. A travesty.
I love Richard Ayoade's dry sense of humour. Which reminds me, Katherine Parkinson makes The IT Crowd for me.
I think there are actually loads of people I've missed out, and I'm also uncomfortable with the lack of order. It's not a well presented list and you probably shouldn't take it too seriously.
I also find Adam Buxton extremely funny. I hope there is a new series of Bug in the works.
What would your list of funny people look like? I'd love to hear.
##Introduction
I recently watched this video Are You Sitting Too Much? and the follow-up 9 Tips To Save Your Life. Please watch them if you sit for long periods of time; they're not long and you learn a lot.
I really had no idea, which is irritating because it makes perfect sense. It makes perfect sense because, even in my ignorance, I have already seen the benefits yielded from breaking up my work, having unwittingly lost half a stone this year without changing my diet (next year's resolution).
The tips in the second video are insightful, but didn't really speak to my specific situation. So, I've come up with a few ideas for how I might tackle the issue.
##Use a Timer
We use a service called Freckle to track the time we spend on projects. On the Mac app at least, the timer beeps once an hour; this I believe is there to indicate that the timer is still running. It seems to me though, that it is also a great reminder to get up and stretch your legs for five minutes.
Now, while I adore Freckle and its beautiful reporting, it is quite expensive: $49 per month for a team of five. I strive for greater efficiency, if only so that one day I feel I can survive the week without Freckle. It is certainly too much to spend on a simple timer to remind you to get up and have a wander.
If I was without Freckle and facing this issue, I'd recognise my personal cycles of work differ at different times during the day. That is to say, that I would be much aggrieved if I was interrupted from my work after a mere hour, first thing in the morning, but would take such a distraction as a blessing mid-afternoon. Instead, I'd chop my day into sprints of different lengths, and set up timers accordingly. This would have the extra benefit of adding a bit more structure to my day, by understanding when I'm more effective at longer or shorter tasks.
##Work Standing
I've long enjoyed the idea of these adjustable desks that allow you to work seated or standing. They are very expensive though, and I wonder how I would actually get on coding for long periods of time while standing.
Coding aside, I think a lot of activities can benefit from standing.
###Catch Up Meetings
The most obvious example is meetings. Quick catch-up meetings have more impact when standing; the same meeting while seated would take twice as long.
###Phone calls
I'm a pacer on the phone, so this is an easy one for me. It can be quite distracting for others, but I'll get most of my steps throughout the day just by talking on the phone.
###Poor man's adjustable desk
This is a crazy idea I've had that I think I'd like to trial. I don't think I'd code standing up, but there is a bunch of other stuff that I would do on a computer while standing. Clearing up email, project management, or testing come to mind.
The premise of my idea is that my development work requires a beast of a machine, but other tasks may not. I'd have two desks, or at least have access to a second.
####Desk One
Desk One, as it has always been: comfy, with a massive computer in front of me.
####Desk Two
More like a shelf, or a "hot desking" coffee shop table at standing height, with an inexpensive laptop (like a Chromebook) set up.
Every time you want to perform an admin task, you get up and walk to your admin shelf. It really doesn't need to be much more than that; if you find yourself starting to lean, you're using the space for the wrong task.
##Conclusion
I'm really excited about setting up sprints. I like that I'd be able to communicate clearer expectations of when I'm approachable to my co-workers, having understood my own process more clearly.
I'm also falling in love with the idea of an admin shelf, or maybe not an admin shelf, maybe an area that is more communal. A standing desk area, that by virtue of standing there says, "Hey I'm approachable".
##Conclusion on Conclusion
What's weird about my conclusion is that it doesn't talk about my ideas in the context of standing more, which is the point of the post. I seem to be more concerned with disruptions in my workflow and being approachable to my co-workers. I think that's interesting, because for me it's all interconnected.
By playing with these techniques, I don't just prevent the onset of having a fat arse, I also become more productive and collaborative in my work.
This is a repost of http://dogma.co.uk/blog/10-converting-svn-to-git
We've recently set about converting all our old SVN repositories to Git. The process is quite easy thanks to the git svn command, but there are some gotchas, so I'll detail the process below.
Every revision in an SVN repository has an author, and these authors need to be migrated to the new Git repository. This is done by compiling a text file listing each existing SVN username along with the author's new Git equivalent. The format of the text file is as follows:
```
svn_username = Git User Name <user@dogma.co.uk>
```
You can list as many users in this file as you like, duplicating the Git details if required. To generate a list of the SVN authors, run the following within the SVN repo:
```shell
svn log --xml | grep author | sort -u | perl -pe 's/.*>(.*?)<.*/$1 = /'
```
A potential gotcha here is that git svn will fail if an SVN username has spaces in it. This caught me out, as our older SVN repos were originally hosted on a VisualSVN Server, which used the username Visual SVN. If you have a username with spaces in it, you must change that username in each revision the author is attached to.
To identify the offending revisions, run:
```shell
svn log | sed -n '/svn_username/,/-----$/ p'
```
Then to fix the username, run the following on each revision:
```shell
svn propedit svn:author -r revision --revprop svn_url
```
Once you have created an authors file (usually called authors.txt), run the following in an empty directory to clone the SVN repo into a new temporary Git repo called git-tmp:

```shell
git svn clone --stdlayout --no-metadata -A authors.txt svn_url git-tmp
```
Change directory into git-tmp and run the following to fetch the SVN repo structure:

```shell
git svn fetch
```
Now you'll want to link the temporary repository to your destination remote Git repo by running the following:

```shell
git remote add remote git_url
git push -u remote master
```
The commands above will only push master (what was trunk) to the remote repo. Currently, any branches you have in the SVN repo only exist as remote references in git-tmp. To make these references local branches and then push them to the server, run the following for each branch you would like to keep:

```shell
branch=branch; remote=remote; git checkout -b $branch remotes/$branch; git push -u $remote $branch; git checkout master
```
Providing all went well, you can now discard the temporary Git repo and clone a fresh copy of your new remote Git repo.
This is a repost of http://dogma.co.uk/blog/1-content-management-with-perch
Perch, if you're not already aware, is a curious little PHP CMS by British design agency edgeofmyseat.com. Curious because, in an ecosystem dominated by feature-rich, open source, free CMSs like WordPress and Drupal, Perch provides only one feature out of the box and costs about £40 per site including VAT. Curious because, given this information, I'm still overwhelmingly drawn to Perch for a lot of my projects.
Why I'm drawn to Perch is the simplicity of the CMS itself. edgeofmyseat.com have found a niche between the handful of static HTML files in a directory and the site built from the ground up to be content managed.
WordPress for instance, makes it very easy to create a very manageable site in minutes, literally. I dropped the folder on to my server, ran the installer and two minutes later I had a fully content managed site. Amazing, and with such a large community supporting WordPress I had customised the look, feel and function of my site with some of the thousands of free themes and plugins on offer.
But what about the site I already have? The individual HTML pages, strung together with anchor tags and a splash of PHP or similar where the contact form required it. Or the client who is intimidated by (or just uninterested in) admin screens filled with Posts, Comments, Plugins and Tools?
If I were to content manage that static site with WordPress, I'd have to extract a template from the many pages built by my predecessors and create a theme; assuming, that is, the pages have retained a uniform appearance over time. Then I'd manually recreate each of the pages in the CMS and create (or find equivalent) plugins for all the little bits of bespoke functionality that WordPress doesn't quite deal with. The horrible feeling developers get when retreading old ground.
Or, you install Perch. Perch is designed to let you work the way you want. You create your pages, your structure, add your images and navigation. Out of the box, Perch does one thing really well: when you're creating your pages and you come across a section that needs to be edited by the client, you drop in a content tag.
```php
<?php perch_content("Dynamic Content"); ?>
```
You also need to tell Perch to watch the page; you do this by adding the following line to the top of the page:
```php
<?php include("../perch/runtime.php"); ?>
```
What happens is this: when you subsequently load that page, Perch queries the database to see if it has any content for page X called 'Dynamic Content'. If Perch doesn't have a content region in that location, it creates one. The next time you log into Perch, you'll be presented with a content region called 'Dynamic Content', flagged as new.
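The lazy-creation behaviour described above can be sketched as follows (a toy in-memory model, not Perch's actual code): look the region up by page and name, and register an empty one flagged as new if it doesn't exist yet. The get_or_create_region() function and the key format are invented for illustration.

```php
<?php
// Toy in-memory stand-in for Perch's database of content regions.
$regions = array();

// On page load, Perch-style lazy creation: return the region if it
// exists, otherwise register the region as new and empty.
function get_or_create_region(array &$regions, $page, $name) {
    $key = $page . '#' . $name;
    if (!isset($regions[$key])) {
        $regions[$key] = array('content' => '', 'new' => true);
    }
    return $regions[$key];
}

// First load of the page registers the region, ready to be
// flagged as new in the admin interface.
$r = get_or_create_region($regions, '/index.php', 'Dynamic Content');
```

On subsequent loads the same lookup finds the existing region and simply returns its content.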
Clicking on the region gives you the opportunity to define the content type from many built-in templates ranging from text to Google Analytics to images to blocks of code. If there isn't a template that suits your needs, create your own. You can also decide whether the content is recurring (like a list of posts) and whether content is to be shared between multiple pages.
So to go back to the example of that static site: your client just wants to change the text on the front page, or update the news feed themselves. Just add a couple of PHP tags to the desired page, set the content type in the CMS and away you go. Replicating the existing news feed is as easy as copying one of the existing news items, pasting it into a new template, and replacing the content with Perch tags.
```html
<perch:content id="heading" type="text" label="Heading" />
```
In summary, Perch is easy to set up (the install process is similar to WordPress) and makes it incredibly easy to add content management to existing sites on PHP-capable web servers. Perch also contains an elegant API for extending the core functionality in the form of Apps. Apps available for download from the Perch site provide blogging functionality and dynamic page creation, amongst other things. Hopefully I'll get a chance to cover Apps and App development in another post. If you regularly deal with legacy sites or just want to simplify things a bit, I recommend you check Perch out.