
freesoftwhere.org

Danger, Will Robinson! Trefoil history and alternate history

Danger and hazard symbols are a funny thing. They’re a deadly serious topic (or else they would not be necessary), but they seem to spark quite a bit of fanciful creativity. They inherently need to communicate dangerousness without the use of words (or else we would just write the words instead…), but in many cases they take context if not full-blown interpretation to understand.

Adapted from https://commons.wikimedia.org/wiki/File:Radiation_warning_symbol-US.svg Public domain

Hot hot hot!

The classic example is the “trefoil” radiation symbol, which has inspired scores of imitators. It is iconic to the point where it gets copied, but somehow most of the copies fail to convey their message of impending peril as effectively as the original.

 

Berkeley beamers

The trefoil as we know it today was invented, so we’re told, in 1946 by a group of brainstormers holed up at the University of California Berkeley’s Radiation Laboratory. The design brief was to communicate the presence of hazardous ionizing radiation. That’s radiation that packs enough energy to knock the electrons off of atoms that it hits, which in turn fouls up the chemical bonds that those atoms can form and, when those atoms are you, seriously ruins your day. Non-ionizing radiation is the harmless stuff like radio waves and flashlight beams that bounces off of solid objects, gets lost in the drapes, and generally does no one any harm (apart from your step-uncle in Idaho who picks up demonic transmissions whenever he knowingly drives past a cell tower and tells you on Facebook about the alleged “road crew” that was out pulling reels of heavy-duty cable on the edge of town where he knows there isn’t any electric service to need cable, so what were they really doing out there?).

Anyway, the Berkeley Radiation Lab doodled up several possibilities, but they settled on the trefoil design and, eventually, went with a magenta-on-yellow color combination. Red and white was too similar to the fire department, blue did not signify “danger” and it faded too easily, and so on. In 1948, Brookhaven National Laboratory also decided it was in need of an ionizing-radiation symbol, so J.H.B. Kuper wrote to Berkeley asking for details on that symbol. Meetings were held, samples were scrutinized, and what came out of it all was the symbol we hold near & dear to our hearts today.

Radiation prototype on green

Reject from a green-field deployment at Berkeley.

An interesting factoid about the trefoil symbol (as Nels Garden related it to Lloyd Stephens and Rosemary Barrett for their 1978 article A Brief History of a “20th Century Danger Sign”) is that the Berkeley brainstormers chose the design we’re familiar with because it suggested rays or radiation shooting out of the nucleus of an atom. Other possibilities were recorded by Stephens and Barrett, including a skull-and-crossbones, a mushroom cloud, something a little harder to discern from the sketches, and a combination skull-and-crossbones-and-trefoil.

Radiation trefoil discussion

For the last time, Hanford, draw with your whole arm not just the wrist.

A second interesting factlet is that the group seems to have rejected several alternate versions of the trefoil design that sound more complicated, especially “signs that incorporated straight or wavy arrows between, or inside, the propeller blades.” Examples are hard to come by, but here are two:

Radiation trefoil prototype

Radiation goes out; fear goes in.

Radiation trefoil prototypes

Wave–particle duality: solved.

Simplicity wins when the opponent is headed toward your internal organs at the speed of light, evidently.

 

One has to be civil

So we’ve had the radiation trefoil for 70 solid years now. About a dozen years after the trefoil’s adoption, we got its first spin-off. Much to our collective surprise, it turned out that friendly folks here in the US of A weren’t the only ones with ionizing radiation sitting around, and once someone lobs theirs in your direction, you’d better have some place suitable to duck and cover. So, in 1961, the Office of Civil Defense rolled out the “fallout shelter” symbol. It copied the three-armed skeleton of the radiation trefoil, but without the central “source” circle and with an enclosing exterior circle.

You don’t have to mutate at home but you can’t stay here.

Bill Geerhart reports that the design of the symbol was commissioned to Blair, Inc of Fairfax, VA—and that the trefoil-derived design was added to a shortlist of options sent up for CD management to choose from primarily because it looked familiar, not because it was the favorite.

Credit for the design goes to Robert W. Blakeley of Blair, Inc. Unfortunately, it seems that the other proposed designs have been lost to Father Time, but Geerhart favors us with one description of an alternate design for the full fallout-shelter sign: “one of them…showed a family of three, holding hands, moving graphically across the center…” In a subsequent e-mail he expanded upon this description slightly: “[it] showed a family of three moving in depth perspective to a shelter, had a small trefoil, without the center dot, in shadow background.”  Think that sounds rough? Hey, you try describing a graphic design project on the phone sometime.

 

Extreme biology

Given a favorable mood and the right background music, one could easily forgive the Office of Civil Defense for its act of sparking the now rampant trefoil-derivatives market. After all, ionizing radiation and fallout shelters are but two sides of the same coin, particularly when it’s a 1961 coin.

Where things took an irrevocable turn toward toothpaste-out-of-the-tube territory, though, is with the biohazard symbol. In 2001, The New York Times reported that the symbol was invented by the fun-loving folks at Dow Chemical in 1966. The process involved a series of focus groups led by Dow’s Charles Baldwin, in which participants were shown a variety of symbol proposals over a few days. The creepy-looking symbol that won was the one that scored highest on the metrics of ‘being memorable’ and ‘not reminding you of something else.’

Biological hazard trefoil symbol

Look out: it’s got biology!

What’s notable, though, is that unlike the radiation trefoil, the biohazard trefoil has no symbolic meaning; the shape at the center and the arms jutting out do not represent anything, although they are somewhat suggestive of “something alive.” They’ve been variously described as insectoid mandibles, antennae, some sort of bacterial flagella, or simply something that’s spreading.

All of those images are encompassed in the “biohazard” category, which is itself another change. Unlike ionizing radiation, there is not a single, well-agreed-upon definition for “biological hazard.” Certainly it includes infectious agents like viruses, but it also includes medical waste and various toxic compounds that could cause sickness or disease. In truth, there is a lot of overlap between things that are deemed biological hazards and things that are deemed “poisons”—though, when you dig into it, there isn’t a universally accepted definition for what qualifies as a poison, either.

Sadly, records of the other candidate symbols tested against Dow’s biohazard trefoil are also hard to come by. But Harvard Medical School did reprint a copy of the Times story—now accessible only through the Internet Archive’s Wayback Machine—and that copy of the story included one picture showing three alternate designs:

Rejected biohazard symbols

In this tub are all my hopes and dreams and also some Ebola.

One factor that seems fairly clear by this point is that the trefoil arrangement had already become identified with the notion of “danger,” which helps explain why it was the model for this first non-radiation-related hazard symbol. Beyond that, it’s hard to say where the alternate candidates miss the mark. The one on the left is clearly an iteration on the same design eventually selected, but the other two, absent years of context, appear strangely non-threatening.

Rejected biohazard symbol 1

Do not open: high risk of jesterification.

Rejected biohazard symbol 2

Yeah but if you stare long enough then anything looks like a beaker.

Rejected biohazard symbol 3

Doing the things a triangle can.

Deciding what these other candidate symbols suggest can be a fun party game, if you go to parties with infectious-disease researchers or whomever the Tom Hanks character from The Da Vinci Code was based on.

One final note about the abstractness of the biohazard trefoil is that it bears a perplexing similarity to the coat-of-arms of the Bordeaux region of France. Although, depending on how you feel about wine, perhaps there’s no coincidence to explain away at all. Either way, it seems to be no conspiracy (at least for now).

Bordeaux symbol

Captain, these culture readings are off the charts.

 

Man vs chemical

So, for the benefit of those of you nodding off, with the fallout-shelter symbol we lost a little of the radiation trefoil’s direct connection to the source of the threat (beams of energy), and with the biohazard symbol we moved a bit further away still: the symbol has no graphical connection to the danger at all; it only looks scary and can ride the coattails of the original symbol’s established connection to “something bad.”

The next iteration (and, as far as I can tell, the most current) of the hazard trefoil is the “chemical weapons” symbol. Here again, the design is constructed on a trefoil frame, but using geometric shapes. This is the symbol you might find plastered on the sides of containers of nerve agents like sarin, or perhaps really strong acid (not that kind, hippie). Or, at least, Wikipedia calls it the “chemical weapons” symbol and gives it an appropriate color as proof:

How will they know it's military if it's not green?

Well, we do know that the army buys a lot of green paint.

Turns out it was meant to be more general than that, although it was created by the military. The US Army Office of the Surgeon General runs (or ran) a training program called the Nuclear, Biological, and Chemical Casualty Training System (NBC CTS). Though offline today, the Wayback Machine has a copy of a page from the site explaining that the symbol was created to match the design of the existing radiation and biohazard trefoils.

“When NBC CTS was first created, it bothered us not only that the chemical symbols differed so greatly in design from the nuclear hazard and biological hazard symbols, but also that there was more than one standard in use. It was for this reason that we constructed our own chemical hazard symbol, as seen above. It has an atom-like look to it, which is appropriate for chemicals.”

Indeed, the most obvious connection the chemical hazard symbol has is to ball-and-stick models of atoms, which are often used to visualize chemical compounds. But it’s interesting that this connection sounds like a happy coincidence. So, too, we should note that the page does not describe the symbol as being reserved for chemical weapons, but for hazardous chemicals in general. The only other design note available is a tidbit on the downloads page that calls out the biohazard symbol for being “just plain cool.” Hey, you get no argument here, doc.

Ask your doctor if chemicals are right for you.

It’s hard to say exactly when the design was created (at least it is for me, unwilling as I am to do more than search around online for part of a day), but the NBC CTS has been around since at least 2000, although there are not a lot of records predating that time period. So the symbol could be rather new. It also has not caught on to the same degree as its elder siblings—perhaps due to time, perhaps since there is already a wide variety of other symbols that the lab-coat crowd uses to mark dangerous chemical substances. And if we may speculate a touch, “chemical” is perhaps just a bit too broad for a single clear sign to cover all the possibilities. After all, cyanide, hydrochloric acid, and nitroglycerin are all dangerous chemicals, but what each does (and how you need to protect yourself from them) varies considerably.

The archived page does not show or describe any alternate design concepts, but it does mention what was in use before. “A few years ago, there had been three variations of the chemical hazard symbol in use. One was a picture of a death’s head, or skull and crossbones. The other was a beaker. The last was a pair of beakers with their necks crossed.” Those latter two don’t sound like we’re missing out on much, but the first does indicate the acceptance problem: the skull-and-crossbones already represents “poison” to most people who have a skeleton or know what one is.

The radiation trefoil, on the other hand, was invented because researchers needed a new symbol to represent a new type of danger. It’s hard to say if the biohazard symbol meets that same criterion or not; there were certainly viruses prior to 1966, but perhaps our perception of them changed as they became something to study or, even, create in laboratory conditions. One thing is for sure, though: the radiation trefoil was so successful that its three-fold design was assumed to be the right starting place when NBC CTS started its own design process.

 

Caution: speculative fiction sighted

Most interesting of all, however, is the fact that this same assumption continues to this day. The trefoil’s iconic shape inspires people to develop their own danger signs, representing more recent hazards and even threats that are entirely hypothetical. In particular, the science-fiction industry (Big SciFi, as your step-uncle calls them) has developed a penchant for spawning trefoil-like hazard symbols regularly, for every looming threat from antimatter to zombies. But that’s a subject for part two, next week.

In the meantime, if you do happen to know of additional historical or rejected trefoil hazard designs—or if you have any more information about the design process for the symbols shown—please do get in touch.  Until then, here’s a gallery of all of the symbol designs seen so far, partly to serve as a convenient article thumbnail, but mostly just to leave you with something to think about.

Trefoils past and present

So much danger; so many choices.

The OpenType in Open Source workshop at LGM 2015

This year at Libre Graphics Meeting, we held a workshop / discussion session about OpenType support in open-source graphics applications. I proposed the session, but really only to act as cheerleader.

OpenType features have been possible (and available) in fonts for well over a decade. But few if any applications make them easy to access and use. At the ATypI meeting in October 2014, type designers got so upset at how bad Adobe’s OpenType feature support is in things like Illustrator and InDesign that they actually started a petition in protest.  That raised a red flag with me, since open-source applications aren’t any better in this regard. So I proposed on the CREATE list that we get together and talk about it.

Turnout was excellent—we didn’t do a full headcount, but representatives from every large free-software graphics tool at LGM were there (and several of the smaller ones, too).  The meeting room was packed. That’s good news, because it indicates a lot of interest in getting proper OpenType support working and in coming up with implementation approaches that will feel consistent from app to app.

To be more specific (for anyone with the misfortune to stumble onto this post from outside), we were there to look at how application projects could add support for optional advanced OpenType features that the user should be allowed to switch on and off as desired.  That turns out to be a bit complicated.

A little background

OpenType features come in two general forms: look-up rules that change the positioning of one or more glyphs (which you’ll see called GPOS lookups), and rules that substitute glyphs for other glyphs (which would be GSUB lookups).  There is a big, public list of “tag” names that font developers can use to designate their various GSUB and GPOS rule sets with some semantic meaning.

For instance, replacing the “f” and “i” glyphs with the “fi” ligature glyph is a GSUB rule that is usually listed under the “standard ligatures” tag, ‘liga’.  Semantically, liga is supposed to be for ligature substitutions that are active by default.  In contrast, the “discretionary ligatures” tag ‘dlig’ is supposed to designate ligatures that are not required, but that the user might want to enable for decorative purposes.  A lot of historical fonts have a “Qu” ligature that would fall under this category, with the tail of the Q sweeping out way under the u.  Similarly, there are GPOS rule sets like “case-sensitive forms” or ‘case’ that are supposed to be always on: ‘case’ is meant to adjust the vertical position of punctuation like hyphens and parentheses so that they line up correctly for ALL CAPITAL TEXT instead of for lowercase.  Then there are GPOS rules that are optional, like “tabular numerals” or ‘tnum’—which shifts all numeric digits to make sure they line up in columns.
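
To make that default-on versus opt-in distinction concrete, here is a toy sketch (this is not any real shaping API; the tag groupings are simply the examples above):

```python
# Toy model of OpenType feature defaults -- illustrative only, not a real
# shaping API. The tags and their defaults follow the examples in the text.
DEFAULT_ON = {"liga", "case"}   # active unless the user disables them
OPT_IN = {"dlig", "tnum"}       # inactive unless the user enables them

def active_features(user_overrides=None):
    """Merge user on/off choices (e.g. {"dlig": True}) with the defaults."""
    overrides = user_overrides or {}
    active = set(DEFAULT_ON)
    for tag, enabled in overrides.items():
        if enabled:
            active.add(tag)
        else:
            active.discard(tag)
    return sorted(active)

print(active_features())                               # ['case', 'liga']
print(active_features({"dlig": True, "liga": False}))  # ['case', 'dlig']
```

The point of the exercise is just that an application needs to track two things per feature: its default state, and any explicit user override.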

[Side note: there’s also a large set of these features that are defined specifically to enable shaping of complex scripts (like Arabic and Indic scripts), where the context of the letters and words requires a lot of flexibility for shape and placement when compared with scripts like European alphabets or CJK text.  Consensus was clear that these features are meant to be handled by the shaping engine, not the application, and the shaping engine is already doing a good job here.]

The first tricky bit, though, is that what’s “supposed” to always be on and what’s supposed to be left up to the user as an option is kind of arbitrary.  The creators of OpenType don’t even agree.  Adobe has one list with such advice; Microsoft has another, and Adam Twardoch of FontLab has yet another.

Discussion and analysis

So we spent some time discussing the various types of OpenType features—at least those on the “official” lists of “registered” tags linked to just above. The question came up of how often that list of registered feature tags gets expanded; the answer is evidently “not often.” Then we talked a lot about the different kinds of features and how they may be used.  Some of them a user might apply only to a few selected characters (even one); others would be desirable for whole blocks of text or documents.

But it’s not that simple.  A “default on” feature cannot be trusted to work flawlessly in every font and every situation, so the user needs some way to switch it off. And a contextual feature like “smart fractions” (‘frac’) might match some text pattern but actually be semantically different in the document. My example was when a user writes “I’m working 24/7”—that numeric sequence looks like a fraction, but in reality it isn’t one. [Note: part of the complication has to do with the fact that there are two slash-like Unicode characters, the ‘slash’ itself (U+002F, “SOLIDUS”) and the ‘fraction bar’ (U+2044). Usually only the ‘solidus’ slash is on the keyboard.]
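
For what it’s worth, the two characters are easy to tell apart programmatically; this small standard-library check shows them side by side (U+2044’s official Unicode name is FRACTION SLASH, though “fraction bar” is a common informal name):

```python
import unicodedata

# The two slash-like characters mentioned above. Only U+002F sits on a
# typical keyboard, which is why 'frac' lookups generally match it too.
slash = "\u002F"
frac_slash = "\u2044"

print(unicodedata.name(slash))       # SOLIDUS
print(unicodedata.name(frac_slash))  # FRACTION SLASH
print("24" + slash + "7")            # what the user actually typed: 24/7
```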

We also looked at several previously published UI proposals (seven in all) related to OpenType features, just to see what the landscape looked like.  Here (as well as in all of the discussion about when and how a user might want to access a particular feature), we got a lot of good feedback from interaction designer Peter Sikking.  For starters, Peter pointed out that many of the UI suggestions are more reactions to what isn’t working right than they are carefully considered interface rules, so they may be interesting, but they are not models worth copying.

Peter also pointed out that the application projects represented have very different needs: an interface that works for Scribus—which is text-centric and offers lots of typography features—would not work for GIMP, where the text tool is a less important component (and one that has far less screen real estate available to its user interface). The best we can hope to do, he said, is come up with some “best practices” that apply to different scenarios, and let each application project implement them on their own as best they can.

Someone (and I think it was Peter, but I’m not 100% sure this many days later; please let me know if it was you) then pointed out that a few of the features amount to typeface-wide choices that are often implemented (at present) in separate fonts.  The prime example is small caps, which is frequently available as an OpenType feature (‘smcp’) but even more frequently is pulled out into a separate font, e.g. “Foo Serif SC”.  Though less used, there is an italics feature tag, too (‘ital’).

Making matters worse, many applications also allow “fake” small caps and italics. The user, however, will likely not care whether small caps or italics are implemented as an OpenType feature or in separate font files; they just want to apply them as a text style. That both presents a UI issue and impacts implementations.

We also briefly discussed whether supporting OpenType features in text markup would affect file formats.  Generally speaking, everyone seemed to think this would not be a difficult problem. There was a general desire to implement something that follows the approach used by CSS.  It seems to be working well for users (and developers), so that looks good.

Among the other points raised:

  • Behdad Esfahbod pointed out that CSS feature support is frequently accessed with a simple slider option that turns features on and off without significant headaches or dependency problems. For example, contextual ligatures, historical ligatures, and discretionary ligatures are all just “ligatures.”  The users don’t care (nor need to know) which feature provides the ligature they want. Similarly, it’s irrelevant to users that the ‘frac’ feature has a hidden dependency on separate numerator and denominator features.
  • Some of the features, like stylistic sets and character variants, come not just with a set of GPOS/GSUB rules, but also a human-friendly name that is encoded into the font.  For example, a font that includes ornamental caps in a stylistic set might name that set “Ornaments”.  This name would be a string in the uiLabelNameId field within the font file; so the application will need a way to access that and expose it to the user.
  • There should probably be some way to specify an “on by default” set, since it seems to be expected, but also a way for the user to switch it off.
  • There should be controls for the common (and well-defined, publicly “registered”) features, but there should also be a fallback mechanism that allows the user or application to access any feature via the feature’s four-letter tag.
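
A sketch of what that raw-tag fallback might look like (the helper name here is invented; the four-printable-ASCII-characters, space-padded-on-the-right rule is the OpenType spec’s definition of a tag):

```python
import re

# Hypothetical helper for the "raw tag" fallback discussed above: accept
# any four-character feature tag, registered or not. Per the OpenType
# spec, a tag is four printable-ASCII characters (0x20-0x7E); shorter
# tags are padded on the right with spaces.
_TAG_RE = re.compile(r"^[\x20-\x7e]{4}$")

def normalize_tag(tag):
    padded = tag.ljust(4)
    if not _TAG_RE.match(padded):
        raise ValueError("not a valid OpenType feature tag: %r" % tag)
    return padded

print(normalize_tag("ss01"))   # registered stylistic-set tag
print(normalize_tag("xyz"))    # pads to 'xyz ' -- unregistered but legal
```

An application UI could use something like this to pass arbitrary tags straight through to the shaping engine when no friendly control exists for them.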

Where to now?

Looking forward, we settled on a few “next action” items. For starters, we are going to try and coordinate our future discussions through the CREATE mailing list, which was invented to be a home for just this sort of collaboration.

Regarding the UI and UX questions, Peter agreed to work on developing what will eventually form the “best practices” and related recommendations for different applications.  The first step, however, is to spend some time talking with typographers and other graphic-designer-like users (who care about OpenType feature support) to study their processes and expectations.  This sort of process is what Peter does professionally; he most recently has been undertaking a similar systematic approach to interaction development with the Metapolator project (which he gave a talk on at LGM).  I mention this to explain that there are several steps between getting started and actually seeing prototypes, much less full-blown recommendations.

Regarding the lower-level plumbing layer: Fontconfig already catches the presence of OpenType feature tables when it indexes a font.  To get access to such a feature, though, the shaping engine (i.e., the software library that takes Unicode text characters, looks them up in the active font, and returns the right glyphs) also needs a way to report the presence of OpenType features, and a way for applications to request that the feature be turned on or turned off.  HarfBuzz is the shaping engine used by almost all free-software tools, and Behdad agreed to take on adding the necessary functions and API calls.

Moving one level up, some applications use HarfBuzz directly, but a lot of applications (including GIMP and Inkscape) use an intermediate text-layout library called Pango.  So Pango will also need hooks for OpenType features.  Behdad indicated that he is on top of this feature request as well.

Application projects, at the moment, do not have a lot that they need to do.  However, since the eventual ‘best practices’ are going to require using HarfBuzz, any application project that has been considering porting its text handling to HarfBuzz would save a lot of trouble later by getting started on that project now.  Earlier in the week, we held a HarfBuzz documentation sprint to develop a “porting manual” so to speak.  It isn’t quite finished yet, but the core example is there and will hopefully prove useful.

The exception to the above is that FontForge may need some work to support access to all of the OpenType features that may be exposed to the applications. The eventual plan was that FontForge (or other font editors) ought to provide a way to test features that somewhat resembles how feature usage is implemented in applications, but getting there may require some groundwork in advance.  The same may also be true for apps like GNOME Characters or the KDE and GNOME font managers, but I don’t think those developers were on hand at LGM.

Similarly, the thinking was that Fontconfig may also require some tweaking in order to allow testing of OpenType features.  During the smallcaps discussion mentioned above, Behdad noted that Fontconfig already lets the user define, in essence, “virtual fonts” that are simply fonts.conf references to existing fonts but with different OpenType features switched on or off.  A quick test revealed that this feature works to a degree, but has some bugs that need attention.  Here again, though, Behdad said he’s happy to take them on.
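
For illustration, such a fonts.conf “virtual font” might look roughly like this (the family names are invented, and given the bugs just mentioned, treat this strictly as a sketch of the idea rather than a tested recipe):

```xml
<!-- Hypothetical fonts.conf fragment: map a made-up "Foo Serif SC" family
     to the real "Foo Serif", with the 'smcp' feature switched on. -->
<match target="pattern">
  <test name="family"><string>Foo Serif SC</string></test>
  <edit name="family" mode="assign"><string>Foo Serif</string></edit>
  <edit name="fontfeatures" mode="append"><string>smcp</string></edit>
</match>
```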

There were also open questions about real-world font implementations of several features. Google Web Fonts and Open Font Library, unfortunately, don’t index which fonts have these features. I agreed to do some research here.

We may also need to gather some good test cases: fonts with a variety of features implemented, and perhaps fonts that we will add features to (e.g., ‘ital’, which seems pretty rare).  If you’d like to help with that, get in touch, of course.

As for a web presence, I have tentatively set up a GitHub organization to use—at this point, primarily for the wiki and progress-tracking functionality.  You can find it at https://github.com/opensource-opentype … you may need to request membership if you want to contribute, although I’m new to “organizations” so bear with me if I have the details a bit off.  We’ll see.

Onward and openward!

For everyone else: if you want to keep up with the discussion, you can follow (or join) the CREATE mailing list. You can also take a look at the Etherpad notes from the session, although I cannot guarantee that they’re free of typos.  If you find any … someone else made those.

More will surely come. If you work on open fonts—or if you use or develop free software—I hope you’ll stay tuned or even get involved.

What’s New in Open Fonts: № 001

Greetings, innocent reader!  I decided a few moons ago to see if it would be valuable to periodically write up a “what’s new in open fonts” column, to cover small developments and/or incremental progress in the realm of open/libre typefaces and in free software for type design / typography / text stuff.  When there are big stories, those tend to get covered, but in between them many of the smaller or less exotic improvements can get lost in the S/N of regular Internet Life.  I don’t know if it will prove valuable or not, but we can at least see.

In any case, as often happens, life gets in its own way, and here we are close to the end of 2014. That is a good time to look back, though, so that’s what I’ll do.  For the sake of space, however, we’ll break things up just a bit.

This first installment is going to cover news that happened in the time period between Libre Graphics Meeting (LGM) 2014 this past April and—roughly speaking—TypeCon 2014.  I already wrote up a rough report of recent developments immediately after LGM; it ran at the free-software news site LWN.net (where I work).  You can read it here: https://lwn.net/Articles/593614/ … and you can, in a sense, consider that “issue № 000.”

So with that bit o’ accounting out of the way, let’s begin.

Recent Releases

Five “big” open font releases have landed recently (at least five that I know of; if I’ve missed any, let me know). “Big” is, of course, a relative adjective; what’s listed below essentially accounts for fonts that garnered widespread attention because of where they come from or where they are used.

First is Fira, which was designed by Erik Spiekermann for the Firefox OS mobile operating system. Firefox OS, of course, comes from Mozilla, and is a free software platform where everything runs on HTML5, CSS, and JavaScript.  Fira saw a 3.1 release in May; since the early work in 2013 (when it was called “Feura” and consisted only of a sans) there has been a monospaced companion added to the family, plus expansion to considerably more weights (in upright and italics).  As of the 3.111 version, there are seventeen weights—though the heaviest (“Ultra”) and the lightest five (“Two”, “Four”, “Six”, “Eight”, and “Hair”) are designated as experimental.  Also noteworthy is that the build notes and a tech report are available to the public. Fira now seems to be developed by Carrois Type Design, although I haven’t found a source documenting what exactly the relationship or the plan for the future of the font family is.  If you know, do tell.

Source Serif, from Adobe, was also released in May. Source Serif is the latest addition to Adobe’s widely used “Source” family.  As you probably recall, Source Sans debuted in 2012, and Source Code (a monospaced typeface) followed in 2013.  Source Serif was designed by Adobe’s Frank Grießhammer (who otherwise seems to be renowned for his overwhelming devotion to the Unicode box-drawing characters, which, in a sense, also makes him a ‘box-drawing character’ when you think about it); it is based on ideas from the work of Pierre-Simon Fournier.  It is a transitional face, but despite having a distinct historical lineage from Source Sans and Source Code, the team has done a lot of work to harmonize the design within the larger family (or “superfamily” if you’re one of those weird taxonomist nerds).

In July, Google unveiled its collaboration with Adobe on Noto CJK, an addition to its Noto family that covers the full Chinese, Japanese, and Korean character sets.  If there’s any lingering doubt about the size of such a typeface, Adobe’s blog post on the release points out that the OTF files contain 65,535 glyphs—the maximum possible in OpenType.  Whether that amounts to a major problem needing immediate attention in OpenType is a popular discussion point.  Nevertheless, Noto CJK (like Noto) is available under the Apache 2.0 license.  Noto is a derivative of the Droid font family (of which there are several) designed to cover as many of the world’s languages as it can; I have not been able to track down more precise info on the designers and developers working on it.  As is always the chorus in this little dog-n-pony show: if you know, please tell me….

Speaking of Droid, Google’s shiny new replacement for Droid (or the Lance Henriksen to its Ian Holm, if you will…) is Roboto, which also received a major update in July.  The update was again the work of Christian Robertson; the redesign was done in concert with the latest Android release.  Most of the changes, according to the announcement, are to rhythm and spacing, although there are a few distinct changes to common glyphs, such as the legs on R and K and changing the dots (on i and j, but also on punctuation) from rectangular to round.

Last but certainly not least, GNU Unifont released its latest update, version 7.0.03, in July.  The update covers every printable code point in Unicode 7.0, Plane 0.  If that name doesn’t ring a bell, GNU Unifont is a fallback font; it is used (for example) to display the generic titlebar symbol for glyphs in the FontForge UI.

Naturally, there have been plenty of open font releases other than these.  Google Fonts announces new releases on its Twitter feed; by my count there were seven: Khand, Rajdhani, Teko, Kalam, Karma, Hind, and Ek Mukta.  Open Font Library featured many more releases—too many, in fact, to list individually in any practical sense.  But you can watch the OFLB Twitter account as well, although the RSS feed is a better alternative for compatibility reasons.

Software Development:

But new font families were not the only releases of note.  One of the easy-to-overlook releases this year was that of Adobe’s Font Development Kit for OpenType (AFDKO), which saw its first Linux release in the spring.  AFDKO is a collection of utilities for building, testing, and QAing (it’s a word; trust me) OpenType fonts.  When this Linux release happened, users still had to agree to Adobe’s non-FOSS license agreement in order to use it, but it was a big step anyway.  For the first time, it became possible to use many of these tools on Linux, both for building one’s own fonts and for building Adobe’s own open font releases. It’s not too useful to have an open source license on a font if you can’t actually build it, after all. We’ll see what else happened with AFDKO in the second installment of this 2014 recap….

A totally unrelated release that caught my attention during this timeframe (and should catch yours as well) was version 0.2 of Raphaël Bastide’s ofont.  Not to be confused with sfont, which is Daniele Capo’s library for doing weird tricks with UFO fonts in the DrRacket IDE. Ofont is a simple web framework for deploying a font web site. You can use it to publish your own open fonts in an easy-to-scan-and-sample manner, or to build a microfoundry site.  Most importantly, when Bastide says it’s simple, he means it: this is a configure-it-in-plain-text-and-you’re-basically-done system, not some heavyweight monstrosity like WordPress or MediaWiki.  The best example of it in action is Bastide’s own font site, usemodify.com.

Arguably the biggest software story in the open font space this year, however, is Metapolator. Metapolator is a parametric font-family design tool that builds on the underlying precepts of Donald Knuth’s METAFONT. The idea is that the type designer can manipulate the parameters that describe an entire font—stroke widths, slant, x-heights and cap heights, contrast, weight, and so on.  Starting with a single font, the designer can extend it into a consistent font family, rather than having to rebuild every family member from scratch.

It’s a powerful and appealing concept, but it is also one fraught with design challenges.  Whole-font parameters cannot be visualized as directly as the Bézier curves in a glyph can, and making them easy to work with is a fairly new problem.
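The core idea is easier to see in code. In the simplest case, “metapolating” between two masters is just linear interpolation over point-compatible outlines; here’s a minimal sketch with invented coordinates (Metapolator’s actual METAFONT-inspired model, with pens and higher-level parameters, is considerably richer than this):

```python
def metapolate(master_a, master_b, t):
    """Blend two point-compatible outlines: t=0 yields master_a,
    t=1 yields master_b, and values in between give intermediates."""
    return [
        ((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
        for (ax, ay), (bx, by) in zip(master_a, master_b)
    ]

# Hypothetical stem points for a light and a bold master of one glyph:
light = [(100, 0), (140, 0), (140, 700), (100, 700)]
bold  = [(80, 0), (200, 0), (200, 700), (80, 700)]

# A "semibold" instance halfway between the two masters:
semibold = metapolate(light, bold, 0.5)
```

Extending this to a whole family means sweeping `t` (and, in Metapolator’s case, many such axes at once) across every glyph consistently—which is exactly why the interface problem is hard.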

To make sense of the problem space and work towards a useful-and-usable interface, the project has been collaborating with interaction designer/developer Peter Sikking of Man+Machine Works. Sikking is a long-time member of the free-software graphics community, and is perhaps best known for his interaction architecture work with the Krita and GIMP teams.  Both of those projects have reaped huge benefits from their respective collaborations; Krita virtually reinvented itself as a first-class natural-media painting application, and GIMP has brought sense and flexibility to a number of its tools over the years with Sikking’s designs (he most recently previewed some work reinventing the text tool, which will be interesting to watch).  So the outlook for Metapolator evolving a good UI/UX for its unusual design task is good.

But the process is not a quick one.  I talked to Sikking about the Metapolator work via video chat at the end of the summer.  Metapolator developer Simon Egli was originally going to join us, but wasn’t able to make it.  At the time, Sikking had completed working out the product vision with the Metapolator team (i.e., refining the purpose and goals for the application) and had recently worked with a number of type designers to observe their existing workflow for the tasks Metapolator is intended to address, and to get feedback from them about Metapolator interface issues.  He was still in the process of sifting through the results of those conversations, after which he would get to work mapping out how the designers would want to use Metapolator and how that lines up with the development team’s viewpoint and the actual codebase.  The plan was to have the designer vision distilled out by September, then a plan for working it into the UI the following month.

The nice thing about my procrastination on this whole endeavor is that that time period has now passed, and you can take a look at the results.  There is a thorough write-up of one face-to-face meeting in late July, an exploration of possible concepts for how multiple parameters (≥ 2 in particular) between master fonts could be presented, and (perhaps more importantly) Sikking has written a design overview that documents the overall structure for how users (type designers, specifically) would interact with Metapolator.  If you read through it—which you should—what you’ll see is how the user’s process of working on a font family with Metapolator breaks up into separate stages of activity: exploring the parameters of interest (weight, slant, style, etc.), actually editing a font that is “metapolated” between multiple original masters until it passes muster, turning the metapolated intermediate into an actual, real font instance, and so on.

There is also a lot of detail in Sikking’s writing that relates to the specifics of the eventual UI: ensuring that tools, menus, and panels fit onto appropriately-sized screen dimensions and so on. That may be less interesting to the type designer than the how-to-use-the-application questions, but it’s certainly good to consider all of those practical questions from day 1, rather than letting them slide to day 0 (note: in this case, “day 0” means whenever the resulting application is launched. “Day 1” on the other hand, means the much earlier starting-point day for the whole process. It’s a mixed metaphor. Deal with it. Maybe some enterprising mathematician would like to explore mapping the production calendar into the reverse-unit-interval [1,0] to see how that affects software development; I don’t plan to tackle it).

What comes next is the implementation phase.  More on that later, perhaps, since much of the recent work on it took place after the arbitrary pre-TypeCon deadline for this write-up.  The best place to follow its progress is the Metapolator Google Plus page, where the team is posting frequent updates.

Other News:

Finally, there was one other significant development in the open font community between LGM and TypeCon, and one that is particularly not fun for those involved.  Designer extraordinaire Vernon Adams was in a serious road accident in late May.  You may know Vernon from the Oxygen font family that has been adopted as the UI font for the KDE desktop environment, or from any of his dozens of other open fonts (which you can read about at his site, newtypography.co.uk).  I first got to know him online, as he was routinely able to dig up scans of old ATF specimen books that bordered on being higher resolution than the real world itself, which was enormously helpful.  A bit later, I spent a week cooped up in a weird Google office building with Vernon, Eben Sorkin, Jason Pagura, Ben Martin, and Molly Sharp, co-authoring the book Start Designing With FontForge—as part of Google’s GSoC Documentation Camp.  It was actually a one-week booksprint guided by FLOSS Manuals’ Adam Hyde, and it was a great experience all around (even when the espresso machine was misbehaving).

In any case, to return to the story at hand, Vernon’s accident was, as alluded to, a bad one, in which he was banged up quite badly. In fact, he was kept in an induced coma for some time, since things can evidently be very touch-and-go (particularly in the early days) where head injuries are concerned.

The good news—and it doesn’t get much better—is that, after all of that time and torment, Vernon is on the mend. Out of the coma and the casts, and in recovery.  That means a lot of the physical-therapy work it takes to come back from a serious injury, though, which isn’t fast.  But he’s also close to where his family lives, for which everyone is grateful as well.

I don’t feel like I ought to dwell too much on Vernon’s recovery process, since that should be his family’s purview.  So I’ll just say that it’s great to see that he’s making progress, and I’m looking forward to the next time our paths cross in person. And I’m already thinking up sarcastic comments to make for whenever that pathcrossing takes place (I suspect that Vernon will find all the public attention pretty embarrassing, so we’ll go from there…).

If you want to stay more on top of Vernon’s story, his wife Allison is blogging about it all at sansoxygen.com. Again, I’m taking a cue from the family that it’s alright to point to the site (since it’s public), but as always, this is kind of personal stuff, so I hope we’re not intruding too much on Vernon’s privacy by mentioning it.

The end (part 001)

That wraps up this edition.  As promised, I will be back to cover TypeCon through the end of 2014 in a follow-up post. Seeing how long this one is, I hope to compress things a bit more for the next installment, but if I’ve left something out, please drop me a line. If there’s still an excess of information for volume 002, I’ll just try to use smaller words.

What exactly is the MeeGo font?

I spent an interesting week at MeeGo Conf in San Francisco.  Overall, a very impressive project that’s doing something no other embedded OS is even attempting: building an open source, cross-platform OS for devices (netbooks, phones, tablets, cars, TVs & set-tops, etc., etc.).  Why is that important?  Cause if you think “app stores” are going to stay on phones and phones alone, you’re woefully behind the times.  And all MeeGo products are guaranteed to be compliant, so the same apps will run on all of them.  Even Google, in spite of the fact that Android is ostensibly open source, is trying to push three separate OSes for its device strategy: Android, ChromeOS, GoogleTV.  Hope you like writing the same game/music player/browser three times, developers!  And the fact that MeeGo just happens to be compatible with desktop Linux distributions — just gravy.

On the other hand, there are some unfortunate “black boxes” in the larger MeeGo project, presumably relics of upstream corporate bootstrapping.  One of those is branding.  At more than one session, I heard community members beg and plead for somebody to drop the preschooler-like cartoon characters.  That’d be wise.

More directly, however, we have a problem with the logotype.  The MeeGo wiki details the logo itself:

… and gives typography guidelines for the “MeeGo font,” which it describes as DIN, linking to the Wikipedia entry on the family.  It also shows a specimen, in three weights:

Pretty clear, right? Well, not really. You see, whatever font they actually chose, it’s at the very least a proprietary remake of DIN.  You can verify that by looking at the two open font implementations of DIN, Paulo Silva’s Open DIN Schriften Engshrift and Open Source Publishing’s OSP DIN. Here’s a side-by-side sample:

As you can see, neither is even close. Starkly different proportions and weights.  Neither has the same non-alphabetic glyphs (though I have no idea where any of them come from).  And that includes the text sample; re-reading the MeeGo wiki page, it could be interpreted to say that the MeeGo logotype is not in DIN at all, but rather is an original design. But regardless of whether that is the intent, the vague “use DIN” instructions can’t be followed, because whatever font they’re using, it’s not available in open source form.  Moreover, since both of the open DIN revivals are based on scanning the original paper designs, it’s clear that they better represent the original typeface — the MeeGo design team may have bought a nice font, but you can hardly call it DIN.  It’s some sort of derivative.  And they won’t say which.
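“Starkly different proportions” is easy to make concrete, by the way: normalize each font’s glyph advance widths against a common reference glyph and compare the ratios. Here’s a sketch with invented widths purely for illustration—real numbers would come from each font’s `hmtx` table (e.g. via fontTools), and none of these values are measured from the actual DIN revivals:

```python
def relative_widths(advance_widths, reference_glyph="n"):
    """Normalize each glyph's advance width against a reference glyph,
    so proportions are comparable across fonts that use different
    units-per-em sizes."""
    ref = advance_widths[reference_glyph]
    return {g: round(w / ref, 2) for g, w in advance_widths.items()}

# Invented advance widths for the same glyphs in two different fonts:
font_a = {"n": 500, "o": 520, "m": 780}
font_b = {"n": 430, "o": 460, "m": 700}

print(relative_widths(font_a))
print(relative_widths(font_b))
```

If the ratios diverge glyph-by-glyph the way the MeeGo samples do against the open DIN revivals, the fonts plainly don’t share proportions, whatever the wiki calls them.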

So what now? Adopt an open source DIN for MeeGo? Specify which proprietary DIN derivative is in use, then wait for a font designer to produce a MeeGo-compatible variant of one of the open versions? Ditch it altogether, and pick something with a little more character?

The latter option might be worth considering, since even if you ignore the fact that DIN is a blasé street-sign face that makes you sad just to look at, reading through OSP’s blog on the subject reveals that the widely repeated mantra that the original DIN was “put into the public domain” is less-than-documented and less-than-clear.  So that’s at least two strikes, maybe three, depending on how highly you value your local street sign.  But who knows; maybe there is a third open source DIN revival out there that I simply haven’t located yet.  Any hints?

If it quacks like a canard

Canonical’s Jono Bacon suggested on Identi.ca yesterday that Linux users should head over to the Adobe Web site and vote for the software behemoth to bring Photoshop to Linux.  It’s not the first time that someone has asked for this, but what’s irritating is the supporting logic, including, notably, the assertion that bringing Photoshop to Linux will bring new users to Linux: specifically, people who would like to switch OSes but  who are “mandated” to use Photoshop at work.

This is a straight-up Internet urban legend.  For starters, it’s flat out untrue that there are designers or photographers in *any* significant numbers who are required by “corporate policy” to use Photoshop.  Design firms don’t work that way.  Sure, there may be some office somewhere with a rule to that effect — it’s a huge world — but it’s nonsense to suggest that it’s anything close to a meaningful blip in the stats.  But even if there were such a person, are any of us supposed to believe that they are not allowed to install GIMP on their computers — but that they will erase OS X or Windows and install Linux instead, in order to use Photoshop-on-Linux?  Are we supposed to believe that Management will allow that?

This chestnut is appealing because it creates a noble protagonist: the strident designer who wants to use Linux, but isn’t allowed to, because he’s being held back by The Man.  How can we not want to help that prisoner of conscience?  But it’s an illusion: GIMP, like OpenOffice and Firefox, is available for Windows and OS X.  The prisoner has a path to freedom, and if he’s not taking it today, it’s not because Enemies of Freedom stand in the way; it’s because either the free apps are unknown to him or he’s looked and prefers what he uses now.  The crux is this: whatever barrier-to-usage exists that prevents a budding free-software user from installing and using GIMP on a non-free OS, that barrier is orders of magnitude smaller than the cost of wiping the existing OS and installing a new one so that the user can run the hypothetical Photoshop-on-Linux.  The path to conversion is Free App on Existing OS, then Free OS altogether.  It is not Proprietary App on Free OS, then Freedom altogether.  The only people capable of thinking in reverse like that are operating system vendors.

I get why nobody likes that solution; it’s harder on the open source community.  It means we have to do hard, thankless work on components like GTK+-on-OSX, on installers and focus and different keybindings, on single-button pointing devices and application resources in screwy Apple Places, and jump through all kinds of other hoops that don’t really seem to earn us many more users. And it seems like an ethical compromise to port free software to a proprietary OS (though for some reason, it’s apparently not a compromise to do the reverse…?)  It’s much easier to say “Hey, Adobe, you do all the work to port Photoshop to Linux, we’ll wait right over here.”

Honestly, any designer who wants to try using Photoshop on Linux right now, can.  The price tag of a CrossOver license is way, way less than a new OS X box or a new Windows 7 license.  So why don’t these designers try that whenever they upgrade their PC hardware?  Partly it’s cause CrossOver ain’t perfect.  But the big reason is simply inertia, like every other PC user has.  Couple that with the fact that an office-ful of designers probably buys bundled licenses for its Adobe products, and the fact that big firms have The IT Guys do all that installing stuff, and you have a situation where nobody’s going to change operating systems only to use the same apps they can already use today.

Every designer I know has a Dock full of apps; little ones, big ones, expensive ones, cheapo ones.  Flexible ones and single-purpose ones.  Nobody does design work 40 hours a week in a single application.  So if we want to bring designers into the fold of open source and free software, we have to start by making the free apps more appealing to the designer currently running other stuff on a proprietary OS.  Easier to download, easier to install, better integrated with the existing OS conventions.  We have to pre-load things like PSPI with GIMP, include more high-end plugins; we have to promote (and yeah, enhance) GIMP’s PSD import capabilities.  GIMP can already export to PSD, something I suspect Bacon isn’t aware of due to his corporate policy comment.  But of course Adobe changes and extends the format periodically, since it’s their ball.

The upshot is that designers care about results, and they’ll use any tool they can get their hands on if it can do cool stuff.  If anything, designers are less resistant to trying new applications than generic office workers or middle managers. The company may insist on saving work in a file format like PSD, particularly when working in a team situation, but that’s an interoperability issue.  In all of the years I spent as a photographer and designer, the only time I ever heard a company dictate a software choice, it was for a DAM (digital asset management system) that they used to stay in sync with remote clients and contractors.  And yep, it was a proprietary one: Extensis.  You know what — that’s another area where free software needs to do some work.  But designers who want to use Linux but can’t because of the lack of Adobe CS?  Come on.

Corporate buying policies are a big deal, and a big hurdle, but not here — they affect offices that upgrade their desktops en masse and buy suites of licenses, and (in my estimation, far more importantly) they affect schools and universities, who negotiate for software licenses in bulk, and have IT or “Academic Computing” offices that manage multiple campus-wide labs, usually remotely, rather than the teachers who actually spend their time in those labs with the students.  They affect governments, which is probably an even bigger obstacle because of all the rules and legal requirements that restrict their buying practices.  Open source needs to make inroads in these areas.  Porting proprietary software to Linux and swapping out the OS isn’t going to do it.

Let’s put “There are people dying to use Linux, but can’t because they have to use Photoshop” to rest — you know,  so we can give air-time back to the other oft-repeated urban legend about GIMP adoption: that no “professional” users will touch it because of its “unprofessional” name.  Cause guess what: that’s flat out untrue, too.  But one canard at a time.

