author     Ben Burwell <ben@benburwell.com>  2018-09-17 22:12:24 -0400
committer  Ben Burwell <ben@benburwell.com>  2018-09-17 22:12:24 -0400
commit     cec95aa2559fc095e4351e5dc69f2268f6350651 (patch)
tree       3cc4e8e06af7911008855505117cc76b5628b1be /_posts
parent     208821cda116bab74e657d6f0c2cd7c23eff9610 (diff)
General cleanup
Diffstat (limited to '_posts')
-rw-r--r--  _posts/2012-08-18-american-education-reform.markdown | 26
-rw-r--r--  _posts/2012-08-19-art-versus-design.markdown | 33
-rw-r--r--  _posts/2012-08-20-interoperability-and-firstnet.markdown | 25
-rw-r--r--  _posts/2012-08-25-the-apple-samsung-battle.markdown | 18
-rw-r--r--  _posts/2012-12-12-mobile-design-paradigm.markdown | 20
-rw-r--r--  _posts/2013-01-13-unified-show-control.markdown | 12
-rw-r--r--  _posts/2013-12-13-helvetica.markdown | 30
-rw-r--r--  _posts/2014-04-23-quick-application-launcher-for-os-x.markdown | 23
-rw-r--r--  _posts/2014-04-28-forest-printer-management.markdown | 30
-rw-r--r--  _posts/2014-05-01-migrating-to-github-pages-and-jekyll.markdown | 44
-rw-r--r--  _posts/2014-05-01-migrating-to-github-pages-and-jekyll.md | 105
-rw-r--r--  _posts/2014-05-03-printing-at-muhlenberg.markdown | 22
-rw-r--r--  _posts/2014-05-03-printing-at-muhlenberg.md | 31
-rw-r--r--  _posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.markdown | 26
-rw-r--r--  _posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.md | 36
-rw-r--r--  _posts/2014-09-30-what-is-two-factor-authentication-and-why-does-it-matter.markdown | 29
-rw-r--r--  _posts/2014-10-10-open-bug-tracking-empowers-users.markdown | 29
-rw-r--r--  _posts/2014-10-11-configuring-cloudflare-universal-ssl.markdown | 49
-rw-r--r--  _posts/2014-10-11-configuring-cloudflare-universal-ssl.md | 85
-rw-r--r--  _posts/2014-12-14-showoff.markdown | 19
-rw-r--r--  _posts/2014-12-14-showoff.md | 38
-rw-r--r--  _posts/2015-01-15-optimizing-css.markdown | 44
-rw-r--r--  _posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.markdown | 32
-rw-r--r--  _posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.md | 61
-rw-r--r--  _posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.markdown | 38
-rw-r--r--  _posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.md | 51
-rw-r--r--  _posts/2015-03-29-visualizing-congress-with-d3.markdown | 76
-rw-r--r--  _posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.markdown | 25
-rw-r--r--  _posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.md | 33
-rw-r--r--  _posts/2015-06-01-facebook-now-sends-pgp-encrypted-email-notifications.markdown | 25
-rw-r--r--  _posts/2016-04-08-whitelisting-tor-on-cloudflare.md (renamed from _posts/2016-04-08-whitelisting-tor-on-cloudflare.markdown) | 5
31 files changed, 441 insertions(+), 679 deletions(-)
diff --git a/_posts/2012-08-18-american-education-reform.markdown b/_posts/2012-08-18-american-education-reform.markdown
deleted file mode 100644
index a26c335..0000000
--- a/_posts/2012-08-18-american-education-reform.markdown
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: American Education Reform
-description: Thoughts on typography and education.
-date: 2012-08-18 00:00:00
-category: writing
-layout: post
-redirect_from: "/writing/american-education-reform/"
----
-
-This was going to be a snarky piece on how good typographic practice is rarely found outside of the professional realm, but nobody would want to read that. Except, perhaps, for other typography nerds. And that is part of what I have to say. But a small part.
-
-<!--more-->
-
-Our public education system has declined in effectiveness to the point of being nearly worthless. The basic subjects — writing, reading, arithmetic, history, and geography — have remained static for a century while the world around us has changed immensely.
-
-It is time to introduce new subjects into the basic collection that every child learns. We must teach electronics, robotics, and programming starting early on. It is absolutely essential that every child have some basic understanding of these in the modern world. (There is research to suggest that POGIL may be especially relevant and effective in STEM education.) We must expand our education in the arts and music, inspiring creativity and aesthetic sensibilities.
-
-But not only must we vary our subject matter; we must be prepared to accept sub-stellar performance in one or two areas in exchange for truly great understanding in others. In short, we need to expose children to a wider variety of material so as to determine their natural abilities and focus their education in those areas. By lowering the bar in some areas, we can raise it in others. So, for instance, instead of requiring a 60 to pass on each of five tests, we allow a 50 to pass on one of them so long as all the others are above a 75.
-
-This is not to say that a student who excels in historical recollection and analysis should not also learn arithmetic. They should simply not be forced to perform at as high a level as a student whose gifts are in that field. The purpose of elementary education should be primarily to inspire creativity and a passion for knowledge in a generation of innovators in whatever field they choose. Additionally, students must gain at least basic knowledge in all fields of study.
-
-Now for my shameless typographical education spiel: it is rather a silly thing that students are being required to use computers to write papers but are not being instructed in their proper use. Papers handed in to English teachers (or any teacher, for that matter) should be graded not only on structure, spelling, and grammar, but also on typographical style. There are correct and incorrect ways to set type, and in an age where it is so easy to do it correctly, it is shameful that we don’t inform students of what the proper way is.
-
-I have focused on the elementary education system because it is the component I have the most distance from. I would find it difficult, as a college student, to write objectively about college education. The topics I have discussed apply, to some extent, to high school education as well.
-
-_N.B. The ideas presented here do not represent a fully functional plan (obviously). Rather, they are intended to be food for thought. Let me know what you think [@bburwell](https://twitter.com/bburwell)._
diff --git a/_posts/2012-08-19-art-versus-design.markdown b/_posts/2012-08-19-art-versus-design.markdown
deleted file mode 100644
index ae1d89e..0000000
--- a/_posts/2012-08-19-art-versus-design.markdown
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: The Difference Between Art and Design
-description: The subtle differences between art and design and their impact on society.
-date: 2012-08-19 00:00:00
-category: writing
-layout: post
-redirect_from: "/writing/art-versus-design/"
----
-
-As I was skimming [a list of observations on art versus design][list], I was struck by one entry in particular:
-
-> Genuinely honest art is created without the market in mind — you are simply creating. Design is created with the market in mind — and the medium does not matter. If you’re a musician or painter, and purposefully crafting your work in order to sell, you’ve become a designer.
-
-<!--more-->
-
-This particular explanation, unlike some, seems to be quite specific and leave no question as to which category a given work falls under. (There are several other excellent quotes on the list; I highly recommend reading it through.)
-
-This particular line, though, made me wonder why we still call some things _art_. Immediately, the music industry popped into my mind. There are certainly many musicians who are truly artists, but they aren’t the ones that get rich off their music. The people who get rich off their music, more often than not, are simply writing songs (and if you’re lucky, they’ll be the ones actually writing them) about things that people will buy. These so-called “artists” slave away in recording studios, repeating each line over and over until its delivery is approved by the marketing team. Hollywood faces the same situation. This commercialization of art is not in fact art at all, but design. Hearing popular musicians revered as “artists” always sounded a little wonky to me.
-
-There are several other entries on the list comparing the subjectivity of art to the objectivity of design. This is another of the fundamental differences. As one quote states, in art, “red” can never be _wrong_, while in design, “red” _can_ be wrong and specific reasons for its being so can be enumerated.
-
-This is not all to say that art is pure and design is evil. They are similar expressions with different intentions. As a designer, I find fascination in poring over every detail of a project and making sure it is perfect. I like to build things, then tear them apart and make them better. Reason must be applied to the creative process. If something won’t make sense to the user, it can’t be part of the project. Design is the implementation of subconscious communication.
-
-Based upon all I’ve discussed and read about the differences between art and design, here is a short list of my distillation:
-
-* Art makes you think; design makes you do.
-* Design is making things simple, while art is making them complex.
-* Art can’t be wrong, but design can.
-* Design is creating the world; art is interpreting it.
-* Design is consistent; art is spontaneous.
-* Art is for the artist; design is for the user.
-
-[list]: http://reinholdweber.com/2012/04/11/random-observations-about-art-vs-design/
diff --git a/_posts/2012-08-20-interoperability-and-firstnet.markdown b/_posts/2012-08-20-interoperability-and-firstnet.markdown
deleted file mode 100644
index b6f74ec..0000000
--- a/_posts/2012-08-20-interoperability-and-firstnet.markdown
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Interoperability and FirstNet
-description: The United States is finally putting real effort into building a nationwide public safety network, but there are serious problems that need to be addressed.
-date: 2012-08-20 00:00:00
-category: writing
-layout: post
-redirect_from: "/writing/interoperability-and-firstnet/"
----
-
-The United States is finally putting real effort into building a nationwide public safety network with [FirstNet], the First Responder Network Authority. FirstNet has been tasked by Congress to build, deploy, and maintain a nationwide broadband network for use by public safety agencies in order to provide completely interoperable communications.
-
-<!--more-->
-
-While I applaud this effort, there are several potential issues that should be addressed. To begin with, a centrally controlled network of this scale would present large reliability problems. On numerous occasions, communities that have rolled out digital or trunked radio systems with many components have had failures when a tower or controller went down, leaving all users in the affected area with no way to communicate with each other or with users in other parts of the system. This is a risk not only from naturally occurring phenomena, such as a power outage or [overheating]; centralized components also make attractive targets for terrorist attacks. If an attack were planned, it would be relatively simple to first bring down the local tower, thereby preventing all communication in the area. Therefore, a high degree of redundancy must be implemented, along with physical infrastructure safeguards, both of which are technically complex and expensive.
-
-It is not clear whether the new broadband system would completely replace all existing public safety communication systems, or if it would simply supplement them in situations where inter-agency coordination is required. The question also arises as to which agencies will use the system. In addition to the many public and governmental agencies that would be involved in the response to a major incident, there are also many <abbr title="Non-governmental organization">NGO</abbr>s, such as the Red Cross and the Salvation Army, that are often involved. Would they be permitted to use the network?
-
-In my opinion, it is much better to have a simpler but more robust interoperability plan. Steps have already been taken in this direction, such as the implementation of several nationwide interoperability frequencies on each band. On the VHF-high band, there are five channels set aside that all agencies have blanket authorization to use for interoperability purposes. This is the <abbr title="Keep It Simple, Silly">KISS</abbr> principle in practice. With simple narrowband FM voice modulation that nearly every existing radio supports, there is no need to add infrastructure or purchase additional assets. Additionally, as using a single frequency is directly radio-to-radio, there is no reliance on an outside device for control, and it is no more subject to jamming than the proposed 700 MHz system would be.
-
-Besides all of the technical problems that could (and certainly will) arise during the construction of the system, it almost seems superfluous. After all, we already have a nationwide broadband network of cell phones. The resources that would be allocated to developing the new system could be better utilized in hardening and improving the cellular network. Perhaps a public safety system could piggy-back on existing infrastructure. I also believe that strengthening the established interoperability frequencies with repeaters would be a much cheaper and more effective way to implement the desired outcome. When working with a large incident, it is exceedingly rare that a responder would need to communicate over more than a mile or two, something that is quite feasible with modern handheld radios used by nearly all agencies. If wider coordination is required, a small number of individuals would need to communicate over longer distances, but this could be done just as well by telephone, cellular, or VoIP.
-
-Perhaps the network will be more useful than it currently seems. However, it is not clear to me at the present time that it will properly address a _bona fide_ need in a cost-effective and reliable manner. As a system increases in complexity, more points of failure are introduced and, all other factors being equal, it becomes less reliable. And speaking from personal experience, reliability is the highest priority for first responders, followed closely by simplicity. The tools we use in the field need to “just work.”
-
-[FirstNet]:http://www.ntia.doc.gov/category/public-safety
-[overheating]:http://www.sfgate.com/bayarea/article/Oakland-police-radios-fail-during-Obama-visit-3736022.php
diff --git a/_posts/2012-08-25-the-apple-samsung-battle.markdown b/_posts/2012-08-25-the-apple-samsung-battle.markdown
deleted file mode 100644
index 49048fa..0000000
--- a/_posts/2012-08-25-the-apple-samsung-battle.markdown
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: The Apple/Samsung Battle
-date: 2012-08-25 00:00:00
-description: What Samsung did is not “theft.” There is no doubt that they blatantly copied some of Apple’s design elements, so based on our current legal system, Apple certainly had every right to pursue damages.
-layout: post
-category: writing
-redirect_from: "/writing/the-apple-samsung-battle/"
----
-
-On August 24, a jury in San Jose, California awarded $1,049,343,540 to Apple after Samsung was found to be in violation of their software and design patents. This case is monumental not because of the actual damages to be paid by Samsung, but because of the precedent it sets. There is no question that Samsung’s designs were inspired by (perhaps even copied from) the iPhone and iPad. In a statement following the ruling, Apple hailed the ruling “for sending a loud and clear message that stealing isn’t right,” while Samsung stated that the verdict “will lead to fewer choices, less innovation, and potentially higher prices. It is unfortunate that patent law can be manipulated to give one company a monopoly over rectangles with rounded corners.”
-
-<!--more-->
-
-While both statements contain convincing rhetoric, there is no direct contradiction, suggesting the possibility that they actually are both true. I am a firm believer in the idea that patents stifle innovation. I also believe that stealing is not right. However, what Samsung did is not “theft.” There is no doubt that they blatantly copied some of Apple’s design elements, so based on our current legal system, Apple certainly had every right to pursue damages.
-
-But to me, and many other consumers, the iPad is still a superior product to Samsung’s tablet. If Samsung was able to create a better product than Apple, perhaps including some of Apple’s design elements, shouldn’t they have every right to profit from it? This is true capitalism. Whoever takes the first step into a new design should not have a monopoly on further developments. As a commenter on a New York Times article on the matter wrote, why are all wheels round? Why do nearly all cars have four of them?
-
-Our patent system should be abolished.
diff --git a/_posts/2012-12-12-mobile-design-paradigm.markdown b/_posts/2012-12-12-mobile-design-paradigm.markdown
deleted file mode 100644
index b2e1d03..0000000
--- a/_posts/2012-12-12-mobile-design-paradigm.markdown
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Changing Mobile UI Design Paradigm
-description: When iOS was first introduced, it was filled with beautiful, glossy icons with shadowing and reflections. However, there’s been a shift in the UI design as the operating system has matured.
-date: 2012-12-12 00:00:00
-category: writing
-layout: post
-redirect_from: "/writing/mobile-design-paradigm/"
----
-
-When iOS was first introduced, it was filled with beautiful, glossy icons with shadowing and reflections. However, there’s been a shift in the UI design as the operating system has matured.
-
-<!--more-->
-
-The glass effect does not have to be built into icons; it is applied by default. Perhaps wisely, though, Apple elected to make the glass effect optional via a setting specified when bundling a new app. This resulted in many third-party developers building their own icons pixel by pixel and choosing not to apply the glass effect. Now, icons that do use the glass effect seem far outnumbered by those that don’t.
-
-When I installed the new Gmail app on my iPhone, I was struck by the beauty of the UI. It embodies the shift we’ve seen in icon design in its large, clean buttons and general lack of three-dimensionality. It transforms the device from a faux-3D space to a planar surface that you can interact with.
-
-The iOS UI has gradually been moving in the same direction, though with nothing nearly as jarring. In iOS 6, the status bar lost its gradient, becoming a solid color that changes contextually.
-
-My guess is that we will continue to see the UI progress toward this new paradigm. Our mobile devices don’t need to be bright and colorful; they need to be functional. What is interesting to note is that Apple, a company celebrated for its brilliant industrial and UI/UX design, has fallen behind Google in this regard. I predict that after the release of the Gmail app, we will see Apple accelerate in this direction with its own products.
diff --git a/_posts/2013-01-13-unified-show-control.markdown b/_posts/2013-01-13-unified-show-control.markdown
deleted file mode 100644
index 0d20336..0000000
--- a/_posts/2013-01-13-unified-show-control.markdown
+++ /dev/null
@@ -1,12 +0,0 @@
----
-layout: post
-title: Unified Show Control
-description: A paper on unifying all aspects of theatrical show control.
-date: 2013-01-13 00:00:00
----
-
-For my freshman writing seminar at Muhlenberg, I wrote a paper on a system I devised for controlling many different theatrical cueing consoles from one master console using MIDI Show Control (MSC). I called my system [Unified Show Control [pdf]](/assets/pdf/Unified_Show_Control.pdf).
-
-Shortly after finishing this project, I discovered that [QLab][] from Figure 53 already has MSC built into it. Though it was slightly disappointing, I was thrilled that my idea already existed, albeit in a somewhat different form.
-
-[QLab]: http://figure53.com/qlab/
diff --git a/_posts/2013-12-13-helvetica.markdown b/_posts/2013-12-13-helvetica.markdown
deleted file mode 100644
index 8d709fb..0000000
--- a/_posts/2013-12-13-helvetica.markdown
+++ /dev/null
@@ -1,30 +0,0 @@
----
-layout: post
-title: Helvetica for Safari and Chrome
-description: Those who believe the web should be made more beautiful will appreciate this extension for Safari and Google Chrome that makes all text display in Helvetica Neue (with regular old Helvetica as a backup).
-date: 2013-12-13 00:00:00
----
-
-Those who believe the web should be made more beautiful will appreciate this extension for Safari and Google Chrome that makes all text display in Helvetica Neue (with regular old Helvetica as a backup).
-
-<!--more-->
-
-Installing Helvetica in Safari
-------------------------------
-
-* [Download Helvetica](http://updates.benburwell.com/safari/helvetica/latest.safariextz) to your computer.
-* Click on the Downloads icon in the toolbar.
-* Double-click on `helvetica.safariextz` to install.
-
-Installing Helvetica in Google Chrome
--------------------------------------
-
-* [Download Helvetica](http://updates.benburwell.com/chrome/helvetica/latest.crx) to your computer.
-* Click the ![triple bar](/assets/images/icons/settings-icon.png) icon on the Chrome toolbar
-* Select Tools > Extensions.
-* Locate the extension file on your computer and drag the file onto the Extensions page.
-* Review the list of permissions in the dialog that appears. If you would like to proceed, click Install.
-
-It’s not perfect; there will be some text that is not Helvetica since this is simply the application of a stylesheet. If a site is using significant amounts of JavaScript, some text may not be transformed. This will be corrected in later versions.
-
-For the most part, fonts will be replaced on sites that don’t have very specific typography. In general, you’ll find that sites that have put care into their typeface choices will have those choices preserved.
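Since the extension is just an injected stylesheet, its core amounts to something like the following (a sketch with an assumed selector list, not the extension's actual source):

```css
/* Swap in Helvetica Neue for common text elements. Omitting
 * !important lets a site's own, more specific font-family rules
 * win, which is why deliberate typeface choices are preserved. */
body, p, div, span, li, td, input, textarea {
  font-family: "Helvetica Neue", Helvetica, sans-serif;
}
```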
diff --git a/_posts/2014-04-23-quick-application-launcher-for-os-x.markdown b/_posts/2014-04-23-quick-application-launcher-for-os-x.markdown
deleted file mode 100644
index f66dad2..0000000
--- a/_posts/2014-04-23-quick-application-launcher-for-os-x.markdown
+++ /dev/null
@@ -1,23 +0,0 @@
----
-layout: post
-title: Quick App Launcher for OS X
-description: How to remap your keyboard to quickly launch applications.
-date: 2014-04-23 00:00:00
-category: writing
-redirect_from: "/writing/quick-application-launcher-for-os-x/"
----
-
-I’ve been using [Alfred][] for some time now as an application launcher. If you’re not familiar with application launchers such as Alfred, it’s essentially Spotlight supercharged. It can find and launch applications, open files, perform custom web searches, even shut down your computer for you — all from commands you type in.
-
-<!--more-->
-
-As with many other aspects of Alfred, the key combination used to activate it is highly customizable. For a long time, I used a double-tap of the Option key, but I felt as though there must be a better solution. Inspired by Google’s decision to replace the traditional Caps Lock key with a “Search” key on the Chromebook, I started poking around on the web.
-
-Enter [PCKeyboardHack][].
-
-This small application allows you to remap your keyboard as desired. I installed it with no hassle and easily found a checkbox for “Change Caps Lock Key.” I used a keycode of `101`, which corresponds to the F9 key, one that I very rarely use. Completing the setup was a matter of opening Alfred’s preferences and setting the Alfred hotkey to F9 by pressing Caps Lock. Now I can activate Alfred to launch applications with a quick press of the Caps Lock key.
-
-The choice of F9 was mostly arbitrary; I wanted a key that I never or almost never used, as well as one that Alfred could use as a hotkey with a single press. If you use your function keys regularly, it might be wise to seek another unused key.
-
-[Alfred]: http://www.alfredapp.com
-[PCKeyboardHack]: https://pqrs.org/macosx/keyremap4macbook/pckeyboardhack.html
diff --git a/_posts/2014-04-28-forest-printer-management.markdown b/_posts/2014-04-28-forest-printer-management.markdown
deleted file mode 100644
index cf32e5c..0000000
--- a/_posts/2014-04-28-forest-printer-management.markdown
+++ /dev/null
@@ -1,30 +0,0 @@
----
-layout: post
-title: Forest™ Printer Management System
-description: For my Software Engineering class, we built a printer management infrastructure.
-date: 2014-04-28 00:00:00
----
-
-In the Fall 2013 semester, I took a Software Engineering class. After a few weeks of studying development lifecycles, scheduling techniques, and the like, we split the class into groups to propose and develop large software projects. I joined the team that was building a system that would track printer usage, display status, and collect statistics. Having previously created [a printer status project](http://mathcs.muhlenberg.edu/~bb246500/printers/), I found the idea intriguing.
-
-<!--more-->
-
-Several of the team members had experience using GitHub, so we decided to [create an organization](https://github.com/printerSystemCSI210) to store documents and provide version control. We had the school Math/CS department web server running Apache available for web hosting. Additionally, I had experience with [Node.js](http://nodejs.org) running on [Heroku](https://www.heroku.com/), so we had that technology in our arsenal as well.
-
-One of the first challenges with an impact on our architecture was that most printers do not have public IP addresses and thus would need to be queried from inside the local network, while we wanted the public-facing site to be accessible regardless of physical location. This led us to develop the concept of an API that would enable a master database to be queried and updated by various components. In developing an API-centric infrastructure, we were also looking down the line toward supporting client-developed applications and native applications for various platforms (iOS, Android, Windows, OS X).
-
-<p style="text-align:center">
- <a href="/assets/images/forest_interaction_diagram.png">
- <img src="/assets/images/forest_interaction_diagram.png" alt="Forest Interaction Diagram">
- </a>
-</p>
-
-Our first task was to develop a data format and database schema. As we intended to use [actionhero](http://actionherojs.com) for the API server, we created a [schema for MongoDB](https://github.com/printerSystemCSI210/api-server/blob/master/initializers/_project.js) and a base [set of API commands](https://github.com/printerSystemCSI210/api-server/tree/master/actions) we would need to implement in order to get a framework of the service up and running. We [deployed this on Heroku](https://forest-api.herokuapp.com).
-
-Simultaneously, we began work on a [web frontend](https://github.com/printerSystemCSI210/frontend) [hosted on the Math/CS server](http://mathcs.muhlenberg.edu/~mb247142/forest/frontend/home.php) that would communicate with the API to display graphs using [chart.js](http://www.chartjs.org). You can make an account here and add printers, though the interface is probably still a bit buggy.
-
-Additionally, we created a [Ruby program](https://github.com/printerSystemCSI210/query-agent) that would be running on the local network and would pull printer addresses from the API and query their status and properties via SNMP and push this information back to the API at a specified interval. We began working on bundling the gem as a standalone application using [Omnibus](https://github.com/opscode/omnibus-ruby), but due to lack of time at the end of the semester, this was never completed.
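The core of such an agent is small. Here is a hypothetical sketch (illustrative names, not the actual Forest code) that maps the Host Resources MIB's `hrPrinterStatus` codes to readable names and, assuming the `snmp` gem's `Manager` API, polls a single printer:

```ruby
# hrPrinterStatus values defined by the Host Resources MIB (RFC 2790).
HR_PRINTER_STATUS = {
  1 => "other",
  2 => "unknown",
  3 => "idle",
  4 => "printing",
  5 => "warmup",
}.freeze

# Translate a raw SNMP status code into a readable name.
def printer_status_name(code)
  HR_PRINTER_STATUS.fetch(code.to_i, "unknown")
end

# Poll one printer for its status. Requires the "snmp" gem at call
# time; the OID below is hrPrinterStatus for the first device index.
def query_printer(host)
  require "snmp"
  SNMP::Manager.open(host: host) do |manager|
    code = manager.get_value("1.3.6.1.2.1.25.3.5.1.1.1")
    { host: host, status: printer_status_name(code) }
  end
end
```

A loop that calls `query_printer` for each address pulled from the API and pushes the results back at a fixed interval is essentially all the agent would need to do.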
-
-At the end of the semester, we had built three interacting components, each using a different technology (Node.js/Mongoose, PHP/Apache, Ruby). You can [read our final Venture Proposal (pdf)](/assets/pdf/forest_venture_proposal.pdf). While all of our components communicated over HTTP using JSON, it’s worth noting that actionhero supports socket connections over TCP/TLS, which would have been a better choice for some of our infrastructure in production. We decided to use HTTP since it was easier to deploy on Heroku’s free tier and easier to interact with without writing additional components in Ruby and PHP.
-
-We’ve talked about continuing to develop the project beyond the class, but no progress has really been made. It’s probably possible to get a working monitoring system up and running based off our code (which is [all on GitHub](https://github.com/printerSystemCSI210)), but it would require quite a bit of legwork as it currently stands.
diff --git a/_posts/2014-05-01-migrating-to-github-pages-and-jekyll.markdown b/_posts/2014-05-01-migrating-to-github-pages-and-jekyll.markdown
deleted file mode 100644
index b1bf2e6..0000000
--- a/_posts/2014-05-01-migrating-to-github-pages-and-jekyll.markdown
+++ /dev/null
@@ -1,44 +0,0 @@
----
-layout: post
-title: Migrating to GitHub Pages and Jekyll
-description: How I moved my website to GitHub Pages using the Jekyll static site generator in under three hours.
-category: writing
-date: 2014-05-01 00:00:00
-redirect_from: "/writing/migrating-to-github-pages-and-jekyll/"
----
-
-I’ve always been a fan of using [Markdown](http://daringfireball.net/projects/markdown/) to create web content. Several years ago, I created [MDEngine](/projects/mdengine/), a small PHP script to render Markdown files in HTML dynamically. For a while, it was responsible for much of the content on my website. In October 2013, I began work on a fresh design. I decided to use a custom Node.js app deployed on Heroku for processing the Markdown. While this worked effectively, I always had some reservations.
-
-<!--more-->
-
-While my site was decently fast, there was no real reason that it needed to be dynamically generated. I was particularly concerned with the performance of the two list pages, whose backend logic consisted of parsing an entire directory of Markdown files each time it was loaded. Though there was no noticeable performance impact, it was not inconceivable that the page generation time would increase substantially as content grew.
-
-In late April 2014, I made some design updates to the site running on Heroku. I decided to take the opportunity to address my performance concerns as well. While my original intent was to simply clean up the server logic I had written, I realized that it would be more sustainable in the long term to migrate to a true static site using [Jekyll](http://jekyllrb.com).
-
-## The Setup
-
-Installing Jekyll locally was a piece of cake; simply running `gem install jekyll` did the trick. I already had a placeholder page in my [benburwell.github.io repo](https://github.com/benburwell/benburwell.github.io), so I `cd`’d to the parent directory and ran `jekyll new benburwell.github.io` to overwrite the old content.
-
-For those unfamiliar with [GitHub Pages](https://pages.github.com), anything that you put in a repo named `[your username].github.io` will automatically be served from that URL. You can also create branches named `gh-pages` in your other repos to serve project-specific sites. In addition to serving static content, GitHub Pages will automatically compile sites generated with Jekyll.
-
-## Porting Content
-
-Next came what was probably the most time-consuming part of the whole process: converting the [Jade](http://jade-lang.com) layout into pure HTML with [Liquid](http://liquidmarkup.org) markup. Luckily, this wasn’t too painful, and I came out with [two layouts](https://github.com/benburwell/benburwell.github.io/tree/master/_layouts): one for page structure and navigation, and the other for displaying posts.
-
-My next challenge was to maintain my link structure so nothing would be broken. The one exception I conceded was my résumé, a PDF file that I had been serving from `/resume/` using Express (admittedly a pretty poor idea). After exploring the Jekyll documentation, I discovered that an easy way to separate out my content into Writing and Projects as I’ve done on my site was to use the built-in category functionality. I would simply create two category pages at [`/writing/index.html`](https://github.com/benburwell/benburwell.github.io/blob/master/writing/index.html) and [`/projects/index.html`](https://github.com/benburwell/benburwell.github.io/blob/master/projects/index.html) to render a list of posts from their respective categories, and tag each Markdown document with the appropriate category. The final step was to define my permalink structure in `_config.yml`, which I did by adding `permalink: /:categories/:title/` to the file.
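A category page of the kind described above is only a few lines of Liquid (a sketch assuming a `default` layout, not the repo's exact contents):

```liquid
---
layout: default
title: Writing
---
<ul>
  {% for post in site.categories.writing %}
    <li><a href="{{ post.url }}">{{ post.title }}</a></li>
  {% endfor %}
</ul>
```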
-
-I next had the pleasure of renaming all of my content files to adhere to Jekyll’s naming convention (`YYYY-MM-DD-hyphen-separated-title.markdown`) and adding/modifying the front matter as necessary.
-
-## Additional Configuration
-
-I decided to [enable the `jekyll-sitemap` plugin](https://help.github.com/articles/using-jekyll-plugins-with-github-pages) by adding `jekyll-sitemap` as a gem to `_config.yml`. This plugin will generate [an XML sitemap](http://www.sitemaps.org) that can be used by crawlers such as those run by search engines to help determine what content needs to be indexed.
-
-I moved my error page over and quickly translated the Jade to Markdown by [following the instructions provided by GitHub](https://help.github.com/articles/custom-404-pages) for creating a custom 404 page. The only remaining issue was my stylesheet problem. In my Express app, I used [Less](http://lesscss.org) for writing my stylesheets. As of this writing, Jekyll does not support compiled stylesheet languages like Less, though [there is the suggestion of future support](http://jekyllrb.com/docs/assets/) for Sass and CoffeeScript.
-
-For now, I’m keeping my stylesheets in `/assets/less/` and compiling them down to a CSS file locally after making changes with `lessc --clean-css style.less ../css/style.css`. While this certainly isn’t perfect, it allows me to keep my Less files intact and to serve minified CSS.
-
-## Conclusion
-
-All in all, the process went very smoothly. I made [the first Jekyll commit](https://github.com/benburwell/benburwell.github.io/tree/042ebd011194592ec155181dc41976493a07e54a) at 18:52 and [changed my DNS records from Heroku](https://github.com/benburwell/benburwell.github.io/tree/35c2061dd13427b1b48525321f7f0156f0b83863) at 21:20, spending about two and a half hours learning Jekyll and converting my site over. This is a pretty rapid deployment — kudos to Jekyll for building such an easy tool.
-
-As far as the future goes, I’d like to see GitHub Pages provide native support for a stylesheet language, be it Less, Sass, or some other one. Additionally, I’d like to see an HTML minification plugin (a minor optimization, but not unreasonable). For the time being, I’m quite happily serving this site with GitHub Pages.
diff --git a/_posts/2014-05-01-migrating-to-github-pages-and-jekyll.md b/_posts/2014-05-01-migrating-to-github-pages-and-jekyll.md
new file mode 100644
index 0000000..8344c0b
--- /dev/null
+++ b/_posts/2014-05-01-migrating-to-github-pages-and-jekyll.md
@@ -0,0 +1,105 @@
+---
+title: Migrating to GitHub Pages and Jekyll
+description: >
+ How I moved my website to GitHub Pages using the Jekyll static site generator
+ in under three hours.
+---
+
+I’ve always been a fan of using
+[Markdown](http://daringfireball.net/projects/markdown/) to create web content.
+Several years ago, I created [MDEngine](/projects/mdengine/), a small PHP script
+to render Markdown files in HTML dynamically. For a while, it was responsible
+for much of the content on my website. In October 2013, I began work on a fresh
+design. I decided to use a custom Node.js app deployed on Heroku for processing
+the Markdown. While this worked effectively, I always had some reservations.
+
+<!--more-->
+
+While my site was decently fast, there was no real reason that it needed to be
+dynamically generated. I was particularly concerned with the performance of the
+two list pages, whose backend logic consisted of parsing an entire directory of
+Markdown files each time it was loaded. Though there was no noticeable
+performance impact, it was not inconceivable that the page generation time would
+increase substantially as content grew.
+
+In late April 2014, I made some design updates to the site running on Heroku. I
+decided to take the opportunity to address my performance concerns as well.
+While my original intent was to simply clean up the server logic I had written,
+I realized that it would be more sustainable in the long term to migrate to a
+true static site using [Jekyll](http://jekyllrb.com).
+
+## The Setup
+
+Installing Jekyll locally was a piece of cake; simply running `gem install jekyll` did the trick. I already had a placeholder page in my
+[benburwell.github.io repo](https://github.com/benburwell/benburwell.github.io),
+so I `cd`’d to the parent directory and ran `jekyll new benburwell.github.io` to
+overwrite the old content.
+
+For those unfamiliar with [GitHub Pages](https://pages.github.com), anything
+that you put in a repo named `[your username].github.io` will automatically be
+served from that URL. You can also create branches named `gh-pages` in your
+other repos to serve project-specific sites. In addition to serving static
+content, GitHub Pages will automatically compile sites generated with Jekyll.
+
+## Porting Content
+
+Next came what was probably the most time-consuming part of the whole process:
+converting the [Jade](http://jade-lang.com) layout into pure HTML with
+[Liquid](http://liquidmarkup.org) markup. Luckily, this wasn’t too painful, and
+I came out with [two
+layouts](https://github.com/benburwell/benburwell.github.io/tree/master/_layouts):
+one for overall page structure and navigation, and the other for displaying
+posts.
+
+My next challenge was to maintain my link structure so nothing would be broken.
+The one exception I conceded to was my résumé, a PDF file that I had been
+serving from `/resume/` using Express (admittedly a pretty poor idea). After
+exploring the Jekyll documentation, I discovered that an easy way to separate
+out my content into Writing and Projects as I’ve done on my site was to use the
+built-in category functionality. I would simply create two category pages at
+[`/writing/index.html`](https://github.com/benburwell/benburwell.github.io/blob/master/writing/index.html)
+and
+[`/projects/index.html`](https://github.com/benburwell/benburwell.github.io/blob/master/projects/index.html)
+to render a list of posts from their respective categories, and tag each
+Markdown document with the appropriate category. The final step was to define my
+permalink structure in `_config.yml`, which I did by adding
+`permalink: /:categories/:title/` to the file.
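For illustration, a category index page along these lines could look like the following sketch (the actual layout name and markup in the repo may differ):

```liquid
---
layout: default
title: Writing
---
<ul>
  {% for post in site.categories.writing %}
    <li><a href="{{ post.url }}">{{ post.title }}</a></li>
  {% endfor %}
</ul>
```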
+
+I next had the pleasure of renaming all of my content files to adhere to
+Jekyll’s naming convention (`YYYY-MM-DD-hyphen-separated-title.markdown`) and
+adding/modifying the front matter as necessary.
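As a purely illustrative aside (this helper is not part of the workflow described here), the expected Jekyll filename can be derived mechanically from a post's date and title:

```python
import re
from datetime import date

def jekyll_filename(published: date, title: str, ext: str = "markdown") -> str:
    # Slugify the title: lowercase, collapse runs of non-alphanumerics
    # into hyphens, and trim any leading/trailing hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{published:%Y-%m-%d}-{slug}.{ext}"

print(jekyll_filename(date(2014, 5, 1), "Migrating to GitHub Pages and Jekyll"))
# prints "2014-05-01-migrating-to-github-pages-and-jekyll.markdown"
```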
+
+## Additional Configuration
+
+I decided to [enable the `jekyll-sitemap`
+plugin](https://help.github.com/articles/using-jekyll-plugins-with-github-pages)
+by adding `jekyll-sitemap` as a gem to `_config.yml`. This plugin will generate
+[an XML sitemap](http://www.sitemaps.org) that can be used by crawlers such as
+those run by search engines to help determine what content needs to be indexed.
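The relevant `_config.yml` fragment looks roughly like this (at the time, GitHub Pages enabled whitelisted plugins via the `gems` key; newer Jekyll versions use `plugins` instead):

```yaml
# _config.yml: enable the sitemap plugin on GitHub Pages
gems:
  - jekyll-sitemap
```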
+
+I moved my error page over and quickly translated the Jade to Markdown by
+[following the instructions provided by
+GitHub](https://help.github.com/articles/custom-404-pages) for creating a custom
+404 page. The only remaining issue was my stylesheet problem. In my Express app,
+I used [Less](http://lesscss.org) for writing my stylesheets. As of this
+writing, Jekyll does not support compiled stylesheet languages like Less, though
+[there is the suggestion of future support](http://jekyllrb.com/docs/assets/)
+for Sass and CoffeeScript.
+
+For now, I’m keeping my stylesheets in `/assets/less/` and compiling them down
+to a CSS file locally after making changes with `lessc --clean-css style.less ../css/style.css`. While this certainly isn’t perfect, it allows me to keep my
+Less files intact and to serve minified CSS.
+
+## Conclusion
+
+All in all, the process went very smoothly. I made [the first Jekyll
+commit](https://github.com/benburwell/benburwell.github.io/tree/042ebd011194592ec155181dc41976493a07e54a)
+at 18:52 and [changed my DNS records from
+Heroku](https://github.com/benburwell/benburwell.github.io/tree/35c2061dd13427b1b48525321f7f0156f0b83863)
+at 21:20, spending about two and a half hours learning Jekyll and converting my
+site over. This is a pretty rapid deployment — kudos to Jekyll for building such
+an easy tool.
+
+As far as the future goes, I’d like to see GitHub Pages provide native support
+for a stylesheet language, be it Less, Sass, or some other one. Additionally,
+I’d like to see an HTML minification plugin (a minor optimization, but not
+unreasonable). For the time being, I’m quite happily serving this site with
+GitHub Pages.
diff --git a/_posts/2014-05-03-printing-at-muhlenberg.markdown b/_posts/2014-05-03-printing-at-muhlenberg.markdown
deleted file mode 100644
index 34d8784..0000000
--- a/_posts/2014-05-03-printing-at-muhlenberg.markdown
+++ /dev/null
@@ -1,22 +0,0 @@
----
-layout: post
-title: Enhancing Printing at Muhlenberg
-description: Avoiding frustration and wasted paper by providing remote status reporting and logical DNS names.
-category: writing
-date: 2014-05-03 00:00:00
-redirect_from: "/writing/printing-at-muhlenberg/"
----
-
-A common frustration of Muhlenberg students is to print a document to a dorm printer only to find that the printer had no paper when going to collect it. This leads to both frustration and wasted paper, since when more paper is put into the printer, it will print out all the queued jobs from when the tray was empty. By that time, students have often given up and printed their document to another printer.
-
-<!--more-->
-
-To avoid this, I created a web page that [reports the status of Muhlenberg printers](http://mathcs.muhlenberg.edu/~bb246500/printers/). The PHP script queries the printers to determine the status of their trays. If you’d like to see other printers added, let me know [by email](mailto:hi@benburwell.com) or [on Twitter](https://twitter.com/intent/tweet?text=@bburwell).
-
-## DNS Names
-
-To facilitate printing from personal computers, I created DNS records for several printers which enable them to be configured with a logical name rather than by IP address. Currently, the following printers/DNS names are available:
-
-* `trumbower48.print.muhlenberg.benburwell.com`
-* `trumbower125.print.muhlenberg.benburwell.com`
-* `trumbower147.print.muhlenberg.benburwell.com`
diff --git a/_posts/2014-05-03-printing-at-muhlenberg.md b/_posts/2014-05-03-printing-at-muhlenberg.md
new file mode 100644
index 0000000..aa33739
--- /dev/null
+++ b/_posts/2014-05-03-printing-at-muhlenberg.md
@@ -0,0 +1,31 @@
+---
+title: Enhancing Printing at Muhlenberg
+description: >
+ Avoiding frustration and wasted paper by providing remote status reporting and
+ logical DNS names.
+---
+
+A common frustration for Muhlenberg students is printing a document to a dorm
+printer only to discover, on going to collect it, that the printer is out of
+paper. This wastes time as well as paper: once the tray is refilled, the
+printer prints out every job queued while it was empty, by which point students
+have often given up and printed their documents to another printer.
+
+<!--more-->
+
+To avoid this, I created a web page that [reports the status of Muhlenberg
+printers](http://mathcs.muhlenberg.edu/~bb246500/printers/). The PHP script
+queries the printers to determine the status of their trays. If you’d like to
+see other printers added, let me know [by email](mailto:hi@benburwell.com) or
+[on Twitter](https://twitter.com/intent/tweet?text=@bburwell).
+
+## DNS Names
+
+To facilitate printing from personal computers, I created DNS records for
+several printers which enable them to be configured with a logical name rather
+than by IP address. Currently, the following printers/DNS names are available:
+
+- `trumbower48.print.muhlenberg.benburwell.com`
+- `trumbower125.print.muhlenberg.benburwell.com`
+- `trumbower147.print.muhlenberg.benburwell.com`
diff --git a/_posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.markdown b/_posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.markdown
deleted file mode 100644
index 84eeaac..0000000
--- a/_posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.markdown
+++ /dev/null
@@ -1,26 +0,0 @@
----
-layout: post
-title: LESS File Compilation for Jekyll and GitHub Pages
-description: Git’s pre-commit hook allows one-click static site deployment — including LESS file compilation — to GitHub pages.
-category: writing
-date: 2014-05-31 00:00:00
-redirect_from: "/writing/less-file-compilation-for-jekyll-github-pages/"
----
-
-I recently wrote about [migrating my website to GitHub Pages](/writing/migrating-to-github-pages-and-jekyll) and noted that I wasn’t completely satisfied with my deployment workflow. Ideally, [creating a build should be done in a single step](http://www.joelonsoftware.com/articles/fog0000000043.html). As I wrote, my previous build workflow required me to manually compile my [LESS](http://lesscss.org) files before committing if I’d made changes. While my stylesheet doesn’t change often, this method is certainly not ideal.
-
-<!--more-->
-
-Using [Git hooks](http://git-scm.com/book/en/Customizing-Git-Git-Hooks), it’s possible to run a script at certain points during the Git workflow. To take advantage of this in my case, I added a small bash script to `.git/hooks/pre-commit`:
-
-{% highlight bash %}
-#!/bin/sh
-
-export PATH=/usr/local/bin:$PATH
-cd /Users/Ben/Documents/Code/benburwell.github.io/assets/less
-lessc --clean-css style.less ../css/style.css
-cd /Users/Ben/Documents/Code/benburwell.github.io
-git add /Users/Ben/Documents/Code/benburwell.github.io/assets/css/style.css
-{% endhighlight %}
-
-This is a pretty rough script, but it gets the job done for me. For a much more thorough script, see [this article by TJ VanToll](http://tjvantoll.com/2012/07/07/the-ideal-less-workflow-with-git/).
diff --git a/_posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.md b/_posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.md
new file mode 100644
index 0000000..221f01e
--- /dev/null
+++ b/_posts/2014-05-31-less-file-compilation-for-jekyll-github-pages.md
@@ -0,0 +1,36 @@
+---
+title: LESS File Compilation for Jekyll and GitHub Pages
+description: >
+ Git’s pre-commit hook allows one-click static site deployment — including LESS
+ file compilation — to GitHub pages.
+---
+
+I recently wrote about [migrating my website to GitHub
+Pages](/writing/migrating-to-github-pages-and-jekyll) and noted that I wasn’t
+completely satisfied with my deployment workflow. Ideally, [creating a build
+should be done in a single
+step](http://www.joelonsoftware.com/articles/fog0000000043.html). As I wrote, my
+previous build workflow required me to manually compile my
+[LESS](http://lesscss.org) files before committing if I’d made changes. While my
+stylesheet doesn’t change often, this method is certainly not ideal.
+
+<!--more-->
+
+Using [Git hooks](http://git-scm.com/book/en/Customizing-Git-Git-Hooks), it’s
+possible to run a script at certain points during the Git workflow. To take
+advantage of this in my case, I added a small bash script to
+`.git/hooks/pre-commit`:
+
+```sh
+#!/bin/sh
+
+export PATH=/usr/local/bin:$PATH
+cd /Users/Ben/Documents/Code/benburwell.github.io/assets/less
+lessc --clean-css style.less ../css/style.css
+cd /Users/Ben/Documents/Code/benburwell.github.io
+git add /Users/Ben/Documents/Code/benburwell.github.io/assets/css/style.css
+```
+
+This is a pretty rough script, but it gets the job done for me. Note that Git
+only runs hooks that are marked executable (`chmod +x .git/hooks/pre-commit`).
+For a much more thorough script, see [this article by TJ
+VanToll](http://tjvantoll.com/2012/07/07/the-ideal-less-workflow-with-git/).
diff --git a/_posts/2014-09-30-what-is-two-factor-authentication-and-why-does-it-matter.markdown b/_posts/2014-09-30-what-is-two-factor-authentication-and-why-does-it-matter.markdown
deleted file mode 100644
index fb9c5a7..0000000
--- a/_posts/2014-09-30-what-is-two-factor-authentication-and-why-does-it-matter.markdown
+++ /dev/null
@@ -1,29 +0,0 @@
----
-layout: post
-title: What is Two-Factor Authentication and Why Does it Matter?
-description: As more web services allow users to enable two-factor authentication (2FA), it's important to understand how it helps secure your accounts.
-date: 2014-09-30 00:00:00
-category: writing
-image: http://www.benburwell.com/assets/images/padlock.png
-redirect_from: "/writing/what-is-two-factor-authentication-and-why-does-it-matter/"
----
-
-With attacks that subvert the security measures of cloud-based services on the rise, many service providers are implementing a strategy known as multi-factor authentication, or simply educating their users about the implementations they’ve had for years.
-
-<!--more-->
-
-So what exactly is it? While logging in to an account usually only requires you to enter the proper password, two-factor authentication, or 2FA for short, relies on multiple different ways of proving your identity. In general, the three types of identification are _knowledge_ (something you know), _possession_ (something you have), and _inherence_ (something you are). Typical 2FA schemes require the presentation of two of these “factors” in order to authenticate.
-
-The knowledge factor is the most popularly understood and includes passwords or passphrases, PINs, and secret patterns. Essentially, 2FA is an authentication scheme that combats the multitude of ways an attacker might gain your password by introducing another — usually possession — factor. It’s easy to imagine a scenario in which your password could be compromised, whether through brute-force guessing, reuse of the same password for multiple services, a social engineering attack such as phishing, or any other means. However, it is unlikely that any of these attackers who gain access to your password will be in sufficient physical proximity to steal or even just see your access token.
-
-A possession factor can take many forms. A simple example is the key that you might use to unlock your door. One possession factor used in electronic systems is a small token, such as the RSA SecurID, with an LCD screen that displays a new number every 30 to 60 seconds. The number is generated with cryptographic functions such that both the authentication server and the token know the same number simultaneously, but it is mathematically hard to predict the next number in the sequence given all previous data. Therefore, by entering the number displayed on the token, you can prove to the server that you are indeed in possession of the token. Myriad other possession factors, each with varying resistance to forgery, include USB tokens, magnetic stripe cards, RFID, and smart cards.
-
-Another common approach to the possession factor is the use of SMS. The authentication server will text a code to the user’s known phone number and expect that code to be entered in order to access the protected resource. This process has evolved with smartphones to leverage push notification technology. Rather than SMS, the authentication server sends a push notification to the user’s preregistered smartphone, where they can confirm or deny the access requested. Perhaps the most common form of 2FA currently being deployed for cloud services is a time-based one-time password scheme. This allows a smartphone app to act as a physical token by generating a time-based password that is supplied as the possession factor.
-
-A typical implementation of the time-based one-time password (TOTP) algorithm as defined by [RFC 6238](http://tools.ietf.org/html/rfc6238) consists of the following steps:
-
-1. The authentication server generates a cryptographic key and shares it securely with the client, such as by scanning a QR code.
-2. The client and server [agree upon several parameters](http://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm#Implementation) needed to generate the token.
-3. The server prompts the user for the generated token to verify that the token is being generated correctly.
-
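As a sketch of what the token generation above involves, here is a minimal TOTP implementation following RFC 6238 with its default parameters (HMAC-SHA-1 and 30-second time steps); this is an illustration, not code from any particular authenticator app:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA-1 variant)."""
    # The moving factor is the number of time steps since the Unix epoch.
    counter = unix_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low 4 bits of the last byte select
    # a 4-byte window, whose top bit is masked off before taking the digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII seed "12345678901234567890", T = 59 s.
print(totp(b"12345678901234567890", 59, digits=8))  # prints "94287082"
```

Both sides compute the same code because they share the secret and (approximately) the same clock, which is why the parameters in step 2 must be agreed upon in advance.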
-Many websites allow their users to enable 2FA as an additional layer of protection for their accounts. Doing so should add an exponentially larger challenge for attackers, making unauthorized access extremely unlikely. To start using 2FA, many service providers will suggest using the [Google Authenticator app](https://support.google.com/accounts/answer/1066447?hl=en). You can also visit [twofactorauth.org](https://twofactorauth.org) for a list of sites that offer 2FA.
diff --git a/_posts/2014-10-10-open-bug-tracking-empowers-users.markdown b/_posts/2014-10-10-open-bug-tracking-empowers-users.markdown
deleted file mode 100644
index 99493dd..0000000
--- a/_posts/2014-10-10-open-bug-tracking-empowers-users.markdown
+++ /dev/null
@@ -1,29 +0,0 @@
----
-layout: post
-title: Open Bug Tracking Empowers Users, but it Hasn’t Been Perfected
-description: Allowing users to view the status of bugs and file bug reports contributes to development, but we’re not all the way there yet.
-date: 2014-10-10 00:00:00
-category: writing
-image: http://www.benburwell.com/assets/images/heisenbug.svg
-redirect_from: "/writing/open-bug-tracking-empowers-users/"
----
-
-The rise of the networked information economy described by Yochai Benkler has both enabled and been enabled by the free software movement. A central facet of this emerging culture is its [participatory nature](http://www.benkler.org/Benkler_Wealth_Of_Networks.pdf). This is reflected in the flagships of the free culture movement such as Wikipedia, where the time and expertise of many is combined to create a competitive alternative to commercial offerings. Though free software aims to be a participatory medium, due to its highly technical and often specialized nature, the barriers to entry for the average “netizen” are often relatively high.
-
-<!--more-->
-
-To organize any software project, a separate program called a bug or issue tracker is often used to categorize and track the progress of bugs in the code, almost like a giant and detailed to-do list. A bug tracker could be anything from a spreadsheet to a sophisticated routing and tracking system that integrates with help desk software and documentation. The larger a project is, the more [crucial it becomes to use a bug tracker](http://www.joelonsoftware.com/articles/fog0000000029.html), as it facilitates the necessary communication between developers and testers as to who is working on what at any given time. Often, bug trackers are accessed by members of the software team through a web browser, as this is an easy and efficient way to collaborate. However, with traditional commercial software, the contents of the bug tracker tend to be considered highly sensitive, proprietary information that should not be disclosed in order to protect the company’s competitive advantage. After all, exposing the software’s issues to the public might drive users to a competitor. For free or open source software projects, however, using a publicly available bug tracker that is open to anyone who wants to see it is the de facto standard, and rightfully so.
-
-There are fundamentally two ways that using an open bug tracker benefits an open source project. First and foremost, as Benkler points out, free software projects tend to have a wide variety of contributors. From gurus who know the code base inside and out, to casual hackers, every contributor brings a different level of skill and expertise to the project. In software engineering, it is commonly understood that you cannot expect everyone to have identical skills such that their role in the team is totally flexible. To maximize the team’s productivity, it is best to take into account each member’s skill set and expertise. For example, if one of the gurus of a mature open-source project finds an error in some of the help files, it would probably be a waste of time to go and fix it when they could be spending their time addressing a highly complex and sophisticated bug or issue with the code. Rather, filing a bug report would allow a newcomer to the project who has not yet accumulated the technical sophistication to address the documentation. In free and open source software, the bug tracker is how this organization and self-assignment of tasks takes place.
-
-In addition to facilitating communication and task assignment, the use of an open bug tracker reduces the barriers to participation in the project by non-technical people, the “users.” Allowing anyone to report an issue they encounter while using the software increases the feedback to the development team, which is especially important in a free-software context where alpha testing may simply mean getting people to download and try out a release candidate rather than relying on an in-house test team. Furthermore, anyone can easily check on the status of an issue they experienced; if you want to know when that annoying problem you had the other day will be fixed, checking the bug tracker will tell you (or at least hopefully give you an idea). Thus, not only does using an openly available bug tracker provide free and open source software with some of the same infrastructure enjoyed by commercial software projects, but it also enables the project to harness the power of the collective in a truly “networked information economy” way.
-
-In addition to facilitating the project, using an open bug tracking system makes an important statement about the collective ownership of the project. By allowing anyone to interact with the system that steers the project as it progresses, the collective nature transcends lofty philosophical ideals and actually puts them into practice. The enabling of a participatory culture in this way cements the collective ownership and collective development of the intellectual property created by the free or open source project.
-
-With the increasing popularity of open-source software, it seems that everyday users are coming to expect that bug tracking will be available. Recently, Microsoft announced the preview of Windows 10, the next version of its operating system. Windows 10 incorporates [a “Feedback” app](http://www.theregister.co.uk/2014/10/08/early_windows_10_feedback_for_microsoft/) which attempts to gather user feedback on problems that they experience, along with publicly viewable suggestions that other users have made. One suggestion stated, “If Microsoft would like its customers to do free software testing, could they at least provide a proper bug-tracking tool with security level and current status.” Granted, the early adopters of a not-yet-released operating system are probably not typical users, but the desire to have access to Microsoft’s internal systems, not to steal corporate secrets but to improve their products, is an interesting juxtaposition of free software and hacker culture with one of the largest commercial software companies.
-
-And other companies are taking the cue. [Atlassian](https://www.atlassian.com/) is an Australian company that makes tools for developers, including a Git server ([Stash](https://www.atlassian.com/software/stash)), an issue tracker ([JIRA](https://www.atlassian.com/software/jira)), and other products. Their [philosophy](https://www.atlassian.com/company/about/values) is rooted in the hacker ethic: information should be free and available unless there is some need to keep it private. While their products are not open source in the sense that anyone can download and contribute to the software, they do provide the complete source code when you buy their software [so that you can customize it to fit your needs](https://www.atlassian.com/end-user-agreement/). Furthermore, their bug tracking system is [completely open to the public](https://jira.atlassian.com/secure/Dashboard.jspa). Anyone can file bugs, add comments, track progress, and see when a feature might be implemented. In addition to providing a live demonstration of their product, it helps them communicate with their customers about upcoming features and respond to their requests. Again, this example of public bug-tracker access is atypical in that their customers are developers who generally use bug tracking software every day at work. Their model seems to have been successful at bridging the gap between a completely closed ecosystem like Microsoft’s and an open-source model where it can be hard for a company to make a profit.
-
-This approach to bug tracking is still not perfect. For Atlassian, it works because of the technical knowledge their customers have, but seeing a page from a bug tracker like Atlassian JIRA or the open-source Bugzilla would still be confusing for the average person. Microsoft seems to have swung too far in the opposite direction with their Feedback tool; while it’s certainly easy to report an issue, there is no follow-up as to whether the issue is being addressed. What we’re missing is the tool that makes bug tracking accessible to everyone, regardless of technical background.
-
-Unsurprisingly, bug tracking is not often the focus of open source projects—they are generally much more focused on writing software than on developing a system for collaboration and user feedback. Thus, by picking a free, off-the-shelf issue tracker, they simplify the lives of the developers without taking the users into account. This is a common pitfall of open-source projects: many begin as one or two hackers who want to build something cool or useful, and the project grows into a product while maintaining the hacker-centric mindset, assuming that the users and the developers are the same group of people rather than focusing development around user requirements. Bug tracking plays a crucial role in open source projects: it links the users to the developers. The problem is that it usually does so in a developer-centric way, such that while it’s theoretically possible for users to report or check the status of bugs, they typically don’t know that they can, or the process is too complicated to follow. Only when this disconnect is bridged will free and open source software be truly participatory for the masses, rather than just for the technologically skilled.
diff --git a/_posts/2014-10-11-configuring-cloudflare-universal-ssl.markdown b/_posts/2014-10-11-configuring-cloudflare-universal-ssl.markdown
deleted file mode 100644
index 96f23e1..0000000
--- a/_posts/2014-10-11-configuring-cloudflare-universal-ssl.markdown
+++ /dev/null
@@ -1,49 +0,0 @@
----
-layout: post
-title: Configuring CloudFlare’s Universal SSL
-description: CloudFlare recently began enabling SSL for all its customers. Here’s how to leverage the CDN to make your website faster and more secure.
-date: 2014-10-11 00:00:00
-category: writing
-image: https://www.benburwell.com/assets/images/universal-ssl.png
-redirect_from: "/writing/configuring-cloudflare-universal-ssl/"
----
-
-On September 29, 2014, [CloudFlare](https://www.cloudflare.com/), a web security company and CDN provider, [announced](http://blog.cloudflare.com/introducing-universal-ssl/) that it would begin offering free, automatic SSL to all of its customers (including those on the free plan). This is an enormous step forward for security and privacy on the Internet; while website owners previously needed to purchase an SSL certificate for their site and often pay extra for SSL hosting, CloudFlare now makes all of this free. Plus, you get the benefits of their other services, such as DDoS protection.
-
-<!--more-->
-
-I’ve previously written about [hosting static sites with GitHub Pages](https://www.benburwell.com/writing/migrating-to-github-pages-and-jekyll/), which is what I use for www.benburwell.com. GitHub provides SSL hosting for its static sites, but not with custom domain names (e.g. `https://example.github.io` but `http://example.com`). Using CloudFlare, it’s possible to use `https://example.com` for free. And as a bonus, you won’t need to worry about DNS hosting either.
-
-What is CloudFlare?
--------------------
-
-CloudFlare works by having all of the traffic for your site routed through CloudFlare’s network, which provides CDN services such as caching of static resources, as well as security options like DDoS protection and a Web Application Firewall (WAF). You’ll need to import your DNS records to CloudFlare and specify CloudFlare’s DNS servers with your domain registrar to facilitate the service. Other nice features include apex `CNAME` records using the `@` character ([traditionally challenging](http://stackoverflow.com/a/16041655)), as well as IPv6 DNS support.
-
-
-Setting Up Free, Universal SSL with GitHub Pages
-------------------------------------------------
-
-_(Note: you can really do this with any host, but I’m going to be describing how I did this with my site.)_
-
-To get started, head over to [CloudFlare](https://www.cloudflare.com/sign-up) and create an account. Next, you’ll specify the website you want to use CloudFlare with (be sure to use your custom DNS name, not `you.github.io`). You’ll have to wait for a few minutes as CloudFlare scrapes your DNS records. Be sure all of them are there, as any that aren’t will cease to be valid once you enable CloudFlare.
-
-Next, head over to your registrar and and change your authoritative name servers to the ones listed in CloudFlare to start routing your traffic through their network. This will take some time to propagate through the DNS network, but should be effective within a few hours. In the meantime, you can take a look at the three Settings pages. There are many options for optimization, redirects, caching, security, and more. The important one is to go down to the SSL option and set it to Flexible SSL. Note that even though you can access your GitHub pages site over SSL, trying to do so with full SSL through CloudFlare will result in an “Unknown Site” error from GitHub.
-
-<aside>
- <p>
- <em>Update on 22 May, 2015:</em>
- Since this article was published, CloudFlare has <a href="https://support.cloudflare.com/hc/en-us/articles/205075117-FAQ-New-CloudFlare-Dashboard">updated their dashboard</a>. Now, the settings for SSL are located under the <a href="https://www.cloudflare.com/a/crypto">"Crypto" tab</a> for your website. The page rules as described below are still configured the same way, but now found under the <a href="https://www.cloudflare.com/a/page-rules">"Page Rules" tab</a>.
- </p>
-</aside>
-
-On the free tier, CloudFlare states that it will take up to 24 hours to provision the SSL certificate for your site. In my case, it only took a few hours. Using one of their paid plans will result in immediate provision. You can check in on whether the certificate has been provisioned by trying to navigate to https://yoursite.com. You’ll likely get a domain mismatch SSL error as CloudFlare defaults to a different certificate until yours has been provisioned. Once you stop receiving the error, you’re good to go!
-
-The final step is to set up Page Rules (of which you get three for free) to redirect visitors to the non-secure site to the SSL one. Go to [My Websites](https://www.cloudflare.com/my-websites) and click Page Rules under the gear icon. Enter the URL patterns to match and flip the “Always use https” to ON.
-
-<p style="text-align:center">
- <a href="/assets/images/cloudflare_ssl_page_rules.png">
- <img src="/assets/images/cloudflare_ssl_page_rules.png" alt="Sample CloudFlare page rules for always using SSL">
- </a>
-</p>
-
-That’s it! You’ve taken an important step towards making the web browsing experience more secure and private for your visitors.
diff --git a/_posts/2014-10-11-configuring-cloudflare-universal-ssl.md b/_posts/2014-10-11-configuring-cloudflare-universal-ssl.md
new file mode 100644
index 0000000..c743004
--- /dev/null
+++ b/_posts/2014-10-11-configuring-cloudflare-universal-ssl.md
@@ -0,0 +1,85 @@
+---
+title: Configuring CloudFlare’s Universal SSL
+description: >
+ CloudFlare recently began enabling SSL for all its customers. Here’s how to
+ leverage the CDN to make your website faster and more secure.
+---
+
+On September 29, 2014, [CloudFlare](https://www.cloudflare.com/), a web security
+company and CDN provider,
+[announced](http://blog.cloudflare.com/introducing-universal-ssl/) that it
+would begin offering free, automatic SSL to all of its customers (including
+those on the free plan). This is an enormous step forward for enhancing security and
+privacy on the Internet; while website owners would previously need to purchase
+an SSL certificate for their site and often pay extra for SSL hosting,
+CloudFlare now makes this all free. Plus, you get the benefits of their other
+services such as DDoS protection.
+
+<!--more-->
+
+I’ve previously written about [hosting static sites with GitHub
+Pages](https://www.benburwell.com/writing/migrating-to-github-pages-and-jekyll/),
+which is what I use for www.benburwell.com. GitHub provides SSL hosting for its
+static sites, but not for custom domain names: `https://example.github.io`
+works, but a custom domain is served only as `http://example.com`. Using
+CloudFlare, it’s possible to serve `https://example.com` for free. And as a
+bonus, you won’t need to worry about DNS hosting either.
+
+## What is CloudFlare?
+
+CloudFlare works by having all of the traffic for your site routed through
+CloudFlare’s network, which provides CDN services such as caching of static
+resources, as well as security options like DDoS protection and a Web
+Application Firewall (WAF). You’ll need to import your DNS records to CloudFlare
+and specify CloudFlare’s DNS servers with your domain registrar to facilitate
+the service. Other nice features include apex `CNAME` records using the `@`
+character ([traditionally challenging](http://stackoverflow.com/a/16041655)), as
+well as IPv6 DNS support.
+
+## Setting Up Free, Universal SSL with GitHub Pages
+
+_(Note: you can do this with any host, but I’ll describe how I set things up
+for my site.)_
+
+To get started, head over to [CloudFlare](https://www.cloudflare.com/sign-up)
+and create an account. Next, you’ll specify the website you want to use
+CloudFlare with (be sure to use your custom DNS name, not `you.github.io`).
+You’ll have to wait for a few minutes as CloudFlare scrapes your DNS records. Be
+sure all of them are there, as any that aren’t will cease to be valid once you
+enable CloudFlare.
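
For reference, a typical record set for a GitHub Pages site might look something like this (the names and addresses here are placeholders for illustration, not values from my actual zone):

```
example.com.        A       192.0.2.1
www.example.com.    CNAME   example.github.io.
example.com.        MX 10   mail.example.com.
```

If any record you rely on is missing from CloudFlare's imported list, add it manually before switching name servers.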
+
+Next, head over to your registrar and change your authoritative name servers to
+the ones listed in CloudFlare to start routing your traffic through their
+network. This change will take some time to propagate through DNS, but should
+be effective within a few hours. In the meantime, you can take a look at the
+three Settings pages, which offer many options for optimization, redirects,
+caching, security, and more. The important step is to find the SSL option and
+set it to Flexible SSL. Note that even though you can access your GitHub Pages
+site over SSL, trying to use Full SSL through CloudFlare will result in an
+“Unknown Site” error from GitHub.
+
+<aside>
+ <p>
+ <em>Update on 22 May, 2015:</em>
+ Since this article was published, CloudFlare has <a href="https://support.cloudflare.com/hc/en-us/articles/205075117-FAQ-New-CloudFlare-Dashboard">updated their dashboard</a>. Now, the settings for SSL are located under the <a href="https://www.cloudflare.com/a/crypto">"Crypto" tab</a> for your website. The page rules as described below are still configured the same way, but now found under the <a href="https://www.cloudflare.com/a/page-rules">"Page Rules" tab</a>.
+ </p>
+</aside>
+
+On the free tier, CloudFlare states that it can take up to 24 hours to
+provision the SSL certificate for your site; in my case, it only took a few
+hours. Using one of the paid plans will result in immediate provisioning. You
+can check whether the certificate has been provisioned by trying to navigate to
+https://yoursite.com. You’ll likely get a domain mismatch SSL error at first,
+as CloudFlare serves a different certificate until yours has been provisioned.
+Once you stop receiving the error, you’re good to go!
+
+The final step is to set up Page Rules (of which you get three for free) to
+redirect visitors from the non-secure site to the SSL one. Go to [My
+Websites](https://www.cloudflare.com/my-websites) and click Page Rules under
+the gear icon. Enter the URL patterns to match and flip the “Always use https”
+setting to ON.
+
+![Sample CloudFlare page rules for always using SSL](/assets/images/cloudflare_ssl_page_rules.png)
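
As a concrete sketch (using a placeholder domain rather than my actual configuration), the patterns take a form like:

```
http://example.com/*
http://www.example.com/*
```

with the “Always use https” setting switched on for each rule.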
+
+That’s it! You’ve taken an important step towards making the web browsing
+experience more secure and private for your visitors.
diff --git a/_posts/2014-12-14-showoff.markdown b/_posts/2014-12-14-showoff.markdown
deleted file mode 100644
index e8cc6f1..0000000
--- a/_posts/2014-12-14-showoff.markdown
+++ /dev/null
@@ -1,19 +0,0 @@
----
-layout: post
-title: Using Showoff for Markdown Presentations
-description: Use Showoff to make slideshows and presentations in Markdown with awesome audience interactivity.
-date: 2014-12-14 00:00:00
-category: writing
----
-
-Recently, I had to give a presentation and decided to do some research on using Markdown. By coincidence, I had also been looking into [Puppet](https://puppetlabs.com), a flexible and powerful configuration manager, when I stumbled across [Showoff](https://github.com/puppetlabs/showoff), another Puppet Labs project.
-
-<!--more-->
-
-Showoff is a Ruby application that takes a Markdown file with some [special formatting](https://github.com/puppetlabs/showoff/blob/master/documentation/AUTHORING.rdoc) and transforms it into a web-accessible slideshow. As expected, you can open up a presenter view in your browser. You can also easily open up a second window to use on your projector in full screen. You can even give your audience the address for the server so they can follow along on their own screens.
-
-There are also some nice audience interactivity features, like the ability to ask questions through the web interface. These questions will be shown on the presenter's screen. Audience members also have the ability to indicate whether the presenter is moving too quickly or too slowly so that an adjustment can be made accordingly.
-
-Finally, Showoff is designed with software presentations in mind, with the ability to dynamically run Ruby, JavaScript, or Coffeescript code included in your slides. You can attach other files or labs to your slides, so audience members following along on their own devices can easily access reference materials at the appropriate time.
-
-For a small presentation like the one I was doing, a lot of the more advanced features of Showoff would have been overkill, but it still made an awesome presentation method. It was also really neat to be able to say that the slides were available on Github if anyone wanted to look at them afterwards.
diff --git a/_posts/2014-12-14-showoff.md b/_posts/2014-12-14-showoff.md
new file mode 100644
index 0000000..4d84352
--- /dev/null
+++ b/_posts/2014-12-14-showoff.md
@@ -0,0 +1,38 @@
+---
+title: Using Showoff for Markdown Presentations
+description: >
+ Use Showoff to make slideshows and presentations in Markdown with awesome
+ audience interactivity.
+---
+
+Recently, I had to give a presentation and decided to do some research on using
+Markdown. By coincidence, I had also been looking into
+[Puppet](https://puppetlabs.com), a flexible and powerful configuration manager,
+when I stumbled across [Showoff](https://github.com/puppetlabs/showoff), another
+Puppet Labs project.
+
+<!--more-->
+
+Showoff is a Ruby application that takes a Markdown file with some [special
+formatting](https://github.com/puppetlabs/showoff/blob/master/documentation/AUTHORING.rdoc)
+and transforms it into a web-accessible slideshow. As expected, you can open up
+a presenter view in your browser. You can also easily open up a second window to
+use on your projector in full screen. You can even give your audience the
+address for the server so they can follow along on their own screens.
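
As a minimal sketch, a Showoff source file interleaves slide markers with ordinary Markdown; the marker names and style classes below (`!SLIDE`, `bullets incremental`) are my recollection of the authoring docs linked above, so check them before relying on this:

```
!SLIDE
# My Talk #

!SLIDE bullets incremental
# Key points #
* this point appears first
* then this one fades in
```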
+
+There are also some nice audience interactivity features, like the ability to
+ask questions through the web interface. These questions will be shown on the
+presenter's screen. Audience members also have the ability to indicate whether
+the presenter is moving too quickly or too slowly so that an adjustment can be
+made accordingly.
+
+Finally, Showoff is designed with software presentations in mind, with the
+ability to dynamically run Ruby, JavaScript, or CoffeeScript code included in
+your slides. You can attach other files or labs to your slides, so audience
+members following along on their own devices can easily access reference
+materials at the appropriate time.
+
+For a small presentation like mine, many of Showoff's more advanced features
+would have been overkill, but it still made for an awesome presentation. It was
+also really neat to be able to say that the slides were available on GitHub if
+anyone wanted to look at them afterwards.
diff --git a/_posts/2015-01-15-optimizing-css.markdown b/_posts/2015-01-15-optimizing-css.markdown
deleted file mode 100644
index 1f8e53a..0000000
--- a/_posts/2015-01-15-optimizing-css.markdown
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: Optimizing your CSS
-description: Boilerplate code is good, but don't forget to optimize it for your application.
-layout: post
-category: writing
-date: 2015-01-15 00:00:00
----
-
-There are probably a lot of ways that you can significantly speed up your page load times by taking a look at your CSS. Here are a couple of places to start looking.
-
-<!--more-->
-
-## Remove unused CSS rules
-
-Using frontend boilerplate like [Bootstrap](http://getbootstrap.com) for CSS or a grid system can be really helpful for prototyping pages quickly. However, in production, it's important to remove CSS rules that are not in use in order to optimize your page load times and rendering speed.
-
-On my website, I use a distilled and responsive version of the [960 grid system](http://960.gs). However, I only use a few grid widths. While there's no harm from a CSS perspective in leaving the extra, unused rules in my code, there's a major performance hit when it comes to rendering the pages in a browser.
-
-I was recently able to trim down the size of my stylesheet substantially by eliminating rules that came with 960 that weren't necessary for my site and only keeping the ones that I needed. One tool that can be really helpful for this is the [Audit tab in the Chrome Developer Tools](https://developer.chrome.com/devtools#audits) that can tell you all the CSS rules that are in effect but unused. You can also try running Google's [PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/) on your site for additional information.
-
-## Choose the location of your CSS
-
-It may make sense to put some of your CSS in the HTML `<head>` in addition to linking in an external stylesheet. Some factors to consider here are whether your pages are predominantly static or dynamic; you definitely want to be able to leverage the full potential of caching with something as rarely-changed as a stylesheet.
-
-If you use a very small amount of CSS and have mostly static content, or a single-page application, the time saved by not making that extra network request may be worthwhile. Just remember that it won't be cached, so you'll be sending your entire stylesheet each time a visitor requests a different page on your site.
-
-On my site, I use a small stylesheet in the `<head>` to load my webfonts; this allows the browser to start loading them sooner rather than having to wait for the external stylesheet to download before finding out about the webfonts.
-
-## Bonus: minified SCSS, Sass, or CSS in Jekyll layouts
-
-If you're using Jekyll, you can pretty easily include a minified SCSS segment in your layouts. I keep my font stylesheet at `_includes/fonts.scss`, so I can use the following chunk of code to include the minified version:
-
-{% highlight html %}
-{% raw %}
-<style type="text/css">
- {% capture fonts %}
- {% include fonts.scss %}
- {% endcapture %}
- {{ fonts | scssify }}
-</style>
-{% endraw %}
-{% endhighlight %}
-
-The minification is, of course, dependent on your `_config.yml`. You can [take a look at mine](https://github.com/benburwell/benburwell.github.io/blob/master/_config.yml) for reference.
diff --git a/_posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.markdown b/_posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.markdown
deleted file mode 100644
index 08a319c..0000000
--- a/_posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.markdown
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Your Website is not Special, Don't Make Visitors Make Accounts
-description: Few things bother me more than when I am forced to make an account to have some basic interaction with a website.
-layout: post
-category: writing
-date: 2015-01-16 00:00:00
----
-
-One of my pet peeves in website usability design is forcing people to create unnecessary accounts. My recent purchase of some concert tickets from [Ticketfly](https://www.ticketfly.com) required me to make an account to buy them. For people who buy a lot of concert tickets, having an account may make a lot of sense. But for me, as someone who buys concert tickets at most once every year or two, having an account on a site that I will probably only use once is not only unnecessary, it's annoying.
-
-<!--more-->
-
-This is not to say that you shouldn't offer accounts; that would be ridiculous (depending on the type of site you are running, of course). However, in general, your users know far better than you do whether or not they actually want or will use an account. Forcing them to create an account will only drive them away. People don't like creating accounts they don't want to have. There's really no reason you can't have a "check out as guest" option.
-
-And if you do offer accounts, here are a couple of rules to follow to ensure a good user experience:
-
-1. Allow the option of using a 3rd-party identity provider (OpenID, Facebook, Google, etc.). Often, visitors don't want to have yet another username/password to remember.
-2. Don't force visitors to use a 3rd-party provider. Always have a local option. As a counter point to (1), many visitors won't want to use their Facebook/Google accounts for authenticating to other sites.
-3. Username = Email. Don't make people remember a username for your site. You may allow them to pick a username later on that can be used in lieu of their email address, e.g. as the URL for a profile page, but don't force them to use a username to log in.
-4. Don't make complicated password rules. If you do have password requirements, show them to the user *before* they try to make a password. Only telling them when their password doesn't fit your requirements causes consternation.
-5. Never *ever* limit how long a password can be (within reason, obviously you don't want to be receiving a megabyte long password). My bank limits passwords to 14 characters, which is rather absurd. Since you're hashing your passwords anyway, it's not like you need to allocate extra memory in your tables to store longer passwords.
-6. Always allow your users to close their account. This should remove all information about them from your service to the extent possible without disrupting the integrity of other information.
-
-Of course, there are technical details that you need to be watching out for that are outside the scope of this post. I'll leave it to you to make sure your implementation is secure and robust, but I'll leave you with a few general tips:
-
-* Don't invent your own crypto. This applies to protocols, hashing, encryption, everything.
-* Use [bcrypt](http://codahale.com/how-to-safely-store-a-password/). Don't use MD5!
-* Using unsecured HTTP (no SSL/TLS) is inexcusable.
-* Don't invent your own crypto.
-* *Don't invent your own crypto.*
-
-For a good overview, see [Salted Password Hashing - Doing it Right](https://crackstation.net/hashing-security.htm).
diff --git a/_posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.md b/_posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.md
new file mode 100644
index 0000000..51d9ff1
--- /dev/null
+++ b/_posts/2015-01-16-your-website-is-not-special-dont-make-visitors-make-accounts.md
@@ -0,0 +1,61 @@
+---
+title: Your Website is not Special, Don't Make Visitors Make Accounts
+description: >
+ Few things bother me more than when I am forced to make an account to have
+ some basic interaction with a website.
+---
+
+One of my pet peeves in website usability design is forcing people to create
+unnecessary accounts. My recent purchase of some concert tickets from Ticketfly
+required me to make an account to buy them. For people who buy a lot of concert
+tickets, having an account may make a lot of sense. But for me, as someone who
+buys concert tickets at most once every year or two, having an account on a site
+that I will probably only use once is not only unnecessary, it's annoying.
+
+<!--more-->
+
+This is not to say that you shouldn't offer accounts; that would be ridiculous
+(depending on the type of site you are running, of course). However, in general,
+your users know far better than you do whether or not they actually want or will
+use an account. Forcing them to create an account will only drive them away.
+People don't like creating accounts they don't want to have. There's really no
+reason you can't have a "check out as guest" option.
+
+And if you do offer accounts, here are a couple of rules to follow to ensure a
+good user experience:
+
+1. Allow the option of using a 3rd-party identity provider (OpenID, Facebook,
+ Google, etc.). Often, visitors don't want to have yet another
+ username/password to remember.
+2. Don't force visitors to use a 3rd-party provider. Always have a local option.
+   As a counterpoint to (1), many visitors won't want to use their
+   Facebook/Google accounts for authenticating to other sites.
+3. Username = Email. Don't make people remember a username for your site. You
+ may allow them to pick a username later on that can be used in lieu of their
+ email address, e.g. as the URL for a profile page, but don't force them to
+ use a username to log in.
+4. Don't make complicated password rules. If you do have password requirements,
+ show them to the user _before_ they try to make a password. Only telling them
+ when their password doesn't fit your requirements causes consternation.
+5. Never _ever_ limit how long a password can be (within reason, obviously you
+ don't want to be receiving a megabyte long password). My bank limits
+ passwords to 14 characters, which is rather absurd. Since you're hashing your
+ passwords anyway, it's not like you need to allocate extra memory in your
+ tables to store longer passwords.
+6. Always allow your users to close their account. This should remove all
+ information about them from your service to the extent possible without
+ disrupting the integrity of other information.
+
+Of course, there are technical details that you need to be watching out for that
+are outside the scope of this post. I'll leave it to you to make sure your
+implementation is secure and robust, but I'll leave you with a few general tips:
+
+- Don't invent your own crypto. This applies to protocols, hashing, encryption,
+ everything.
+- [Use bcrypt][bcrypt].
+- Using unsecured HTTP (no SSL/TLS) is inexcusable.
+- Don't invent your own crypto.
+- _Don't invent your own crypto._
+- **[Use bcrypt][bcrypt].**
+
+[bcrypt]: https://codahale.com/how-to-safely-store-a-password/
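
The salted, slow-hash principle behind those tips can be sketched with Python's standard library. This uses `hashlib.pbkdf2_hmac` only because it ships with Python; the bcrypt library linked above remains the better production choice, and the iteration count here is illustrative:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; a fresh random salt per user."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("Tr0ub4dor&3", salt, digest)
```

Note that the stored digest is a fixed 32 bytes no matter how long the password is, which is exactly why capping password length (rule 5) buys you nothing.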
diff --git a/_posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.markdown b/_posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.markdown
deleted file mode 100644
index b3154df..0000000
--- a/_posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.markdown
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: How to Reset a Lost Password on a LUKS-Encrypted Disk in Ubuntu Linux
-description: I recently needed to reset a lost password on an Ubuntu installation. But the LUKS encryption on the disk gave me some challenges. Here's what I did.
-layout: post
-category: writing
-date: 2015-03-28 00:00:00
----
-
-Here's the situation I recently found myself in:
-
-* Ubuntu Linux 14.10
-* Unknown password for user account
-* Unknown (but set) root password (Ubuntu's philosophy is to use `sudo` for everything)
-* LUKS encrypted filesystem (known passphrase)
-* Physical access to the computer
-
-<!--more-->
-
-I needed to reset my account password. Normally, with physical access to a machine, all bets are off when it comes to security. I tried booting up the machine into [recovery mode](https://wiki.ubuntu.com/RecoveryMode) by holding down <kbd>shift</kbd> as soon as the BIOS had finished loading. But when I selected the "Drop to root shell" option, I was prompted to enter the unknown root password.
-
-My second approach was to boot into single user mode by editing the GRUB command script.
-
-<div class="text-center"><a href="/assets/images/ubuntu-grub.png"><img src="/assets/images/ubuntu-grub.png" alt="Ubuntu's GRUB menu"></a></div>
-
-By going down to the recovery mode option and hitting <kbd>e</kbd>, you can edit the GRUB commands. By adding <code>init=/bin/bash</code> at the end of the line beginning with <code>linux</code> that specifies the boot image, you can specify an initial shell to use. Then I hit <kbd>F10</kbd> to boot.
-
-After waiting for about 30 seconds or a minute, I saw a message that waiting for the root device (the locked disk) had timed out. I was then dumped into an [initramfs](https://wiki.ubuntu.com/Initramfs) shell. From there, I was able to unlock the disk by running <code>cryptsetup luksOpen /dev/sda3 sda3_crypt</code>.
-
-Next, I mounted the freshly-unlocked disk with <code>mount -o rw /dev/sda3 /root</code>, taking advantage of the pre-existing empty directory. From there, I used <code>chroot</code> to run <code>passwd</code> in the OS.
-
-{% highlight bash %}
-$ chroot /root passwd
-$ chroot /root passwd myUserName
-{% endhighlight %}
-
-By running these commands, I successfully reset both the root password as well as the password for my account. From there, I was able to restart the machine and boot normally.
-
-*Is something here incorrect? Know of a better way to do it? Let me know [@bburwell](https://twitter.com/bburwell).*
diff --git a/_posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.md b/_posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.md
new file mode 100644
index 0000000..c1b678d
--- /dev/null
+++ b/_posts/2015-03-28-reset-forgotten-password-on-luks-encrypted-ubuntu.md
@@ -0,0 +1,51 @@
+---
+title: How to Reset a Lost Password on a LUKS-Encrypted Disk in Ubuntu Linux
+description: >
+ I recently needed to reset a lost password on an Ubuntu installation. But the
+ LUKS encryption on the disk gave me some challenges. Here's what I did.
+---
+
+Here's the situation I recently found myself in:
+
+- Ubuntu Linux 14.10
+- Unknown password for user account
+- Unknown (but set) root password (Ubuntu's philosophy is to use `sudo` for everything)
+- LUKS encrypted filesystem (known passphrase)
+- Physical access to the computer
+
+<!--more-->
+
+I needed to reset my account password. Normally, with physical access to a
+machine, all bets are off when it comes to security. I tried booting up the
+machine into [recovery mode](https://wiki.ubuntu.com/RecoveryMode) by holding
+down <kbd>shift</kbd> as soon as the BIOS had finished loading. But when I
+selected the "Drop to root shell" option, I was prompted to enter the unknown
+root password.
+
+My second approach was to boot into single user mode by editing the GRUB command
+script.
+
+![Ubuntu's GRUB menu](/assets/images/ubuntu-grub.png)
+
+By going down to the recovery mode option and hitting <kbd>e</kbd>, you can edit
+the GRUB commands. By adding `init=/bin/bash` at the end of the line
+beginning with `linux` that specifies the boot image, you can specify
+an initial shell to use. Then I hit <kbd>F10</kbd> to boot.
+
+After waiting for about 30 seconds or a minute, I saw a message that waiting for
+the root device (the locked disk) had timed out. I was then dumped into an
+[initramfs](https://wiki.ubuntu.com/Initramfs) shell. From there, I was able to
+unlock the disk by running `cryptsetup luksOpen /dev/sda3 sda3_crypt`.
+
+Next, I mounted the freshly-unlocked disk with `mount -o rw /dev/sda3 /root`,
+taking advantage of the pre-existing empty directory. From there, I used
+`chroot` to run `passwd` in the OS.
+
+```shell
+$ chroot /root passwd
+$ chroot /root passwd myUserName
+```
+
+By running these commands, I successfully reset both the root password as well
+as the password for my account. From there, I was able to restart the machine
+and boot normally.
diff --git a/_posts/2015-03-29-visualizing-congress-with-d3.markdown b/_posts/2015-03-29-visualizing-congress-with-d3.markdown
deleted file mode 100644
index 18e7bb8..0000000
--- a/_posts/2015-03-29-visualizing-congress-with-d3.markdown
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: Visualizing Congress with D3.js
-description: Learning D3.js with Congress visualizations.
-layout: post
-category: writing
-date: 2015-03-29 00:00:00
----
-
-<div>
- <style scoped>
- .d3container {
- width: 100%;
- margin-top: 2em;
- margin-bottom: 2em;
- }
- </style>
-</div>
-
-I've been wanting to learn [D3.js](http://d3js.org/) for a while now, so I decided to create some visualizations of the United States Congress, inspired by Neil deGrasse Tyson:
-
-<div class="text-center">
- <img alt="What profession do all of these senators and congressmen have?" src="/assets/images/vis_ndgt0.jpg">
- <img alt="Law, law, law, law, business man, law, law, law..." src="/assets/images/vis_ndgt1.jpg"><br>
- <img alt="Where are the scientists? Where are the engineers? Where's the rest of... life?" src="/assets/images/vis_ndgt2.jpg"><br>
-</div>
-
-<!--more-->
-
-It wasn't hard to find some [open-source Congress data](https://github.com/unitedstates/congress-legislators), and converting the [YAML](https://github.com/unitedstates/congress-legislators/blob/master/legislators-current.yaml) to [JSON](/assets/data/legislators-current.json) was [practically a one-liner in Ruby](https://gist.github.com/benburwell/20e76f70645c8003b088#file-yaml-to-json-rb). Armed with my trusty JSON data, I set off to learn the basics of D3.
-
-Conveniently, D3 packages some of the base functionality that we often turn to jQuery for, eliminating the need to include yet another library. Using CSS selectors to query the DOM, adding nodes and attributes, and fetching JSON data are just a few such functions.
-
-D3 also comes with some pretty neat built-in plotting functions. I wanted to make a bubble chart to show the gender and number of terms of each legislator. My first attempt looked something like this:
-
-<div class="d3container" id="d3gender_terms_v0"></div>
-
-I used green dots for legislators who identified as female and blue dots for legislators who identified as male. [The code for this](/assets/scripts/d3/gender_terms_v0.js) is very simple, and it doesn't produce a totally awesome result. What I wanted to do next was to bundle all of the circles together in a meaningful way.
-
-Fortunately, D3 has a layout feature that allows you to easily use some pre-built layouts such as `d3.layout.pack()`. The unfortunate part is that I found the documentation rather hard to use and the particular data structure required by D3 to use the `pack()` layout was hard to track down as someone very new to D3. It turns out that this layout is a type of [hierarchical layout](https://github.com/mbostock/d3/wiki/Hierarchy-Layout), which expects an object with an array of `children`, a `value`, `depth`, and `parent`, all of which are used depending on the particular type of layout used. In the case of the `pack()` layout, D3 computes an *x*-coordinate, a *y*-coordinate, and a radius based on the `value` of each datum.
-
-While sorting the data does not produce an optimal packing, it does help visualize the makeup of Congress. I wanted to put the legislators who had the largest number of terms in the center of the pack. I also used the `d3.scale.category10()` function to produce a color value for each gender automatically. The [resulting code](/assets/scripts/d3/gender_terms_v1.js) produces a very nice bubble chart:
-
-<div class="d3container" id="d3gender_terms_v1"></div>
-
-Let's take advantage of some other data available to us and look at the relative prevalence of different religions. The dataset we're using only has religion data for about a third of the current legislators, so we can start off by making a bar graph of the proportions within that subset:
-
-<div class="d3container" id="d3religion_v0"></div>
-
-As you might expect, [the code for the bar graph](/assets/scripts/d3/religion_v0.js) is fairly simple. One interesting thing we can do here is create a linear scale by specifying its domain and range. Essentially, this gives us a way to compute the appropriate width of each bar as a function of the actual data value. The most complex part of this visualization is the `transform()` function, which prepares the raw data for use in D3. This function exists because of the added challenge I gave myself of transforming the dataset only client-side.
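
Under the hood, such a scale is just a linear map from data space to pixel space. A hand-rolled stand-in (the domain and range numbers below are illustrative, not the ones the visualization actually uses) behaves like:

```javascript
// A toy equivalent of d3.scale.linear(): maps a data domain onto a pixel
// range so that bar widths are proportional to data values.
function linearScale(domain, range) {
  var d0 = domain[0], d1 = domain[1];
  var r0 = range[0], r1 = range[1];
  return function (x) {
    return r0 + ((x - d0) / (d1 - d0)) * (r1 - r0);
  };
}

// e.g. percentages (0-100) mapped onto a 500px-wide chart area
var width = linearScale([0, 100], [0, 500]);
width(0);   // -> 0 (left edge)
width(50);  // -> 250 (halfway across)
```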
-
-<aside><p><em>n.b.</em> &mdash; While transforming the data client-side on load adds to the page rendering time, I wanted to see how many sorts of visualizations I could make using just one dataset. In a production environment, it would probably make sense to flatten and transform the data as necessary for each visualization server-side, though an analysis of download time vs. time spent transforming the data for different visualizations would be necessary due to browser caching.</p></aside>
-
-We can also make [a quick donut chart](/assets/scripts/d3/religion_v1.js) to show the subset of legislators that we examined in our bar graph.
-
-<div class="d3container" id="d3religion_v1"></div>
-
-As you may have noticed, there is some overlap among the religions people identify with. Should "Roman Catholic" really get a separate bar from "Christian"? Or "Catholic"? This seems like a great opportunity to use another hierarchical representation. However, our data source does not contain any hierarchical data about religions, so let's find something else to visualize!
-
-Since we have information on each legislator's terms, let's see what we get by making a [partition layout](https://github.com/mbostock/d3/wiki/Partition-Layout) of their party affiliation. Since there are currently over 500 legislators in the U.S. Congress, we'll take a random 10% sample so that things don't get too out of hand:
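
One way to take that random 10% sample client-side is sketched below (this is an assumption about the approach; the post's actual sampling code lives in `party_affiliation_v0.js`): Fisher-Yates shuffle a copy of the array, then keep the first tenth.

```javascript
// Shuffle a copy of the data in place (Fisher-Yates), then slice off the
// requested fraction. Each page load yields a different subset.
function sample(data, fraction) {
  var copy = data.slice();
  for (var i = copy.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = copy[i];
    copy[i] = copy[j];
    copy[j] = tmp;
  }
  return copy.slice(0, Math.ceil(copy.length * fraction));
}

// 535 placeholder records standing in for the legislator data
var everyone = [];
for (var n = 0; n < 535; n++) everyone.push({ id: n });
var subset = sample(everyone, 0.1);
// subset.length -> 54 (ceil of 53.5); the members vary per refresh
```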
-
-<div class="d3container" id="d3party_affiliation_v0"></div>
-
-Since the random sample is being taken in the client, you should see a new chart if you refresh the page. You can check out [the source code](/assets/scripts/d3/party_affiliation_v0.js) for full details.
-
-<hr>
-
-After this brief exploration, it's clear that D3 is an extremely powerful &mdash; if not completely intuitive &mdash; library for building rich, data-driven documents. I'm excited to continue learning about D3 and using it in my own projects. You should definitely take a look at [Mike Bostock's site](http://bost.ocks.org/mike/) for some much cooler applications of D3 from its creator. There's also a [gallery](https://github.com/mbostock/d3/wiki/Gallery) as well as [tons of other examples](http://bl.ocks.org/mbostock). And of course, you can check out the [D3 source code on GitHub](https://github.com/mbostock/d3).
-
-Thanks for reading! If you have any comments, let me know [@bburwell](https://twitter.com/bburwell).
-
-<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.5/d3.min.js"></script>
-<script src="/assets/scripts/d3/gender_terms_v0.js"></script>
-<script src="/assets/scripts/d3/gender_terms_v1.js"></script>
-<script src="/assets/scripts/d3/religion_v0.js"></script>
-<script src="/assets/scripts/d3/religion_v1.js"></script>
-<script src="/assets/scripts/d3/party_affiliation_v0.js"></script>
diff --git a/_posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.markdown b/_posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.markdown
deleted file mode 100644
index ed0142b..0000000
--- a/_posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.markdown
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Getting Login to Work on Ubuntu 15.04 with NVIDIA Drivers
-description: When I upgraded to Ubuntu 15.04, logging in broke. Here's how I fixed it.
-layout: post
-date: 2015-04-23 00:00:00
----
-
-When I upgraded to Ubuntu 15.04, I was unable to log in. The machine started normally and I was presented with the login window. But when I entered my password, the screen went black for a few moments and then the login screen came back.
-
-<!--more-->
-
-Since I'm using an [NVIDIA GeForce GTX 750](http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-750), which Ubuntu's Nouveau drivers don't support, I previously needed to install the NVIDIA graphics drivers.
-
-By entering <kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>F3</kbd>, I was able to drop to a shell. When I checked `/var/log/Xorg.0.log`, I found a message stating that the NVIDIA driver had failed to load the GLX module, despite earlier messages that it had been loaded. The message also recommended reinstalling the NVIDIA driver.
-
-In the same shell, I ran:
-
-{% highlight bash %}
-wget http://us.download.nvidia.com/XFree86/Linux-x86_64/349.16/NVIDIA-Linux-x86_64-349.16.run
-chmod u+x NVIDIA-Linux-x86_64-349.16.run
-sudo service lightdm stop
-sudo ./NVIDIA-Linux-x86_64-349.16.run
-{% endhighlight %}
-
-After that, restarting my computer cleared up the issue.
diff --git a/_posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.md b/_posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.md
new file mode 100644
index 0000000..fd8c0c7
--- /dev/null
+++ b/_posts/2015-04-23-getting-login-to-work-ubuntu-15.04-nvidia.md
@@ -0,0 +1,33 @@
+---
+title: Getting Login to Work on Ubuntu 15.04 with NVIDIA Drivers
+description: When I upgraded to Ubuntu 15.04, logging in broke. Here's how I fixed it.
+---
+
+When I upgraded to Ubuntu 15.04, I was unable to log in. The machine started
+normally and I was presented with the login window. But when I entered my
+password, the screen went black for a few moments and then the login screen came
+back.
+
+<!--more-->
+
+Since I'm using an [NVIDIA GeForce GTX
+750](http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-750), which
+Ubuntu's Nouveau drivers don't support, I previously needed to install the
+NVIDIA graphics drivers.
+
+By entering <kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>F3</kbd>, I was able to drop
+to a shell. When I checked `/var/log/Xorg.0.log`, I found a message stating that
+the NVIDIA driver had failed to load the GLX module, despite earlier messages
+that it had been loaded. The message also recommended reinstalling the NVIDIA
+driver.
+
+In the same shell, I ran:
+
+```
+wget http://us.download.nvidia.com/XFree86/Linux-x86_64/349.16/NVIDIA-Linux-x86_64-349.16.run
+chmod u+x NVIDIA-Linux-x86_64-349.16.run
+sudo service lightdm stop
+sudo ./NVIDIA-Linux-x86_64-349.16.run
+```
+
+After that, restarting my computer cleared up the issue.
diff --git a/_posts/2015-06-01-facebook-now-sends-pgp-encrypted-email-notifications.markdown b/_posts/2015-06-01-facebook-now-sends-pgp-encrypted-email-notifications.markdown
deleted file mode 100644
index 7be8aae..0000000
--- a/_posts/2015-06-01-facebook-now-sends-pgp-encrypted-email-notifications.markdown
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Facebook Now Sends PGP Encrypted Email Notifications
-description: Today, I noticed that Facebook now has a place for you to list your PGP public key.
-layout: post
-date: 2015-06-01 00:00:00
-image: https://www.benburwell.com/assets/images/facebook_gpg.png
----
-
-Today, I noticed that Facebook now has a place for you to list your PGP public key. If you go to your "About" page and open the "Contact and Basic Info" section, there is now a line for you to paste your key. In addition to allowing other people to easily access your public key, there's also a checkbox for Facebook to encrypt notification emails with the key.
-
-<!--more-->
-
-<p><a href="/assets/images/facebook_gpg.png"><img src="/assets/images/facebook_gpg.png" style="max-width:100%" alt="Facebook now gives the option to list your PGP public key"></a></p>
-
-The mouseover help text states:
-
-> If you check this box, you will receive an encrypted verification email to
-> make sure that you can decrypt notification emails that have been encrypted
-> with this public key. If you are able to decrypt the verification email and
-> click the provided link, Facebook will begin encrypting notification emails
-> that it sends to you with your public key.
-
-I tried it, and just as described, I got an encrypted email signed with PGP key `0xDEE958CF`. After decrypting the email and following the link, I was alerted that email notifications would now be encrypted. I haven't actually verified this, since I don't receive email notifications from Facebook to begin with.
-
-A quick Google search as well as a search in the Facebook Help Center turned up no results, so I'm not sure how recent this feature is; perhaps it's being rolled out gradually.
diff --git a/_posts/2016-04-08-whitelisting-tor-on-cloudflare.markdown b/_posts/2016-04-08-whitelisting-tor-on-cloudflare.md
index 365eea7..2a22e78 100644
--- a/_posts/2016-04-08-whitelisting-tor-on-cloudflare.markdown
+++ b/_posts/2016-04-08-whitelisting-tor-on-cloudflare.md
@@ -1,11 +1,8 @@
---
title: Whitelisting Tor on CloudFlare
description: >
- CloudFlare poses an insignificant barrier to Tor users, but site operators can
+ CloudFlare poses a significant barrier to Tor users, but site operators can
ease their way by whitelisting Tor.
-layout: post
-date: 2016-04-08 00:00:00
-image: https://www.benburwell.com/assets/images/tor.png
---
On March 30th, 2016, CloudFlare posted [a blog entry entitled "The Trouble with