Martin חיים Berlove
creator, thinker, polymath
Summary of Experience
Selected Projects / Achievements / Activities
Calm Speech Project [In Progress]
Goal: Achieve a better society through better communication.
A short guide to help people learn new tools and frameworks with less hassle
Received MongoDB Developer Associate certification
Customized Vi[m] themes
Much to my surprise, I've become something of a vim adherent, enough to want my own themes to suit my preferences.
Simple Trello tasklist integration using PHP
A small task list app made using Ruby's Hanami framework
Decided to kill two birds with one stone and make a productivity app for myself while learning a newer framework.
A better way to think about the money you spend
Conceptual game made for Github Game Off 2016
Extensible interface for demonstrating graph theory fundamentals.
Pong clone built with jQuery Mobile (JQM) that supports touch input.
Video-based classes on various topics in computing
A variety of raytraces for fun and education
Learning German language, spoken and written
Through Duolingo, textbooks, online tutoring, and immersion, I hope to bring my skills up to a level suitable for day-to-day communication with native speakers and for reading technical papers written in German.
Explorations in music creation
Recent Writing (@MartinBerlove)
“Scrappy Path”
A “scrappy path” is some user’s “happy path.”

In the world of software, you often hear about the “happy path” of a product. A “happy path” is the optimal use case, the ideal journey a user takes to accomplish a goal in a piece of software, from start to finish. In such a conception, we expect that the user acts intelligently, provides reasonable input to forms and other controls, performs tasks in a sensible and predictable order, and does not stress the system.

Builders of software use this “happy path” as a marker of how the system should expect to be handled.

In the real world, we know such an idyllic use case won’t plausibly cover all or even most scenarios. And it then seems reasonable to ask this: If we know the “happy path” won’t suffice, why do we so commonly explore, support, and even depend on it?

Getting to the Happy Path

A quality software system endures many levels of testing before being released to the user. In a proper release flow, even the least plausible cases have been considered, tested against, and either disallowed from the user or elegantly avoided: the system is robust, and the builders are confident in its ability to handle whatever a user may throw at it.

In practical reality, we often lack the time and resources to handle all edge cases (there are so many!) or to ensure the soundness of each component of the software and how those components relate (the connections are practically infinite!) or to plan against every improper use of the software (users are very creative!).

So when it comes down to the wire and a project deadline is looming, we look at the software, at all its moving parts, and we ask where the line should be drawn. We consider how best to trade off between what ought to be done and what may realistically be accomplished. You can’t ship a broken product, but you won’t gain customers if you never ship the product in the first place.

Thus the “happy path.” It is the core use case, the key way we expect a feature or tool to be used. It is this flow, above any other, that must be supported and must prove strong, stable, and flawless (or at least appear so).

Error: You’re Too Clever

This “happy path” method of balancing disparate needs doesn’t always bear out well for an individual user. When someone purchases a piece of software, that person will naturally leverage their new tool however best suits the needs of the moment. But when software is built primarily around a “happy path,” it’s all but guaranteed to let down some user, someone trying to bend the tool to work in a way that wasn’t originally thought of.

When you, as a consumer of a software product, try to do something in a novel and different way, thinking you’ve found a smarter method to accomplish a task, and then the software breaks, the “happy path” tradeoff is probably at fault. Congratulations, you’ve gone off the “happy path” and have encountered your own unique “scrappy path” — one of many edge cases that are difficult to predict and time-consuming to plan against.
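A minimal sketch of this tradeoff in code (the function and inputs are illustrative, not drawn from any real product): the logic below handles the “happy path” perfectly, and a perfectly reasonable real-world input breaks it.

```python
def split_name(full_name):
    """Parse 'First Last' into parts -- happy-path logic only."""
    first, last = full_name.split(" ")
    return first, last

# Happy path: exactly the input the builders planned for.
assert split_name("Ada Lovelace") == ("Ada", "Lovelace")

# Scrappy path: a legitimate multi-part surname violates the
# unstated assumption of exactly one space, and the tool breaks.
try:
    split_name("Martin van der Berg")
except ValueError as exc:
    print("scrappy path broke the tool:", exc)
```

The builders never tested the second input, not because it is unreasonable, but because deadlines forced the line to be drawn at the first.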

You’re not wrong to get upset that the software isn’t working; if an interface allows you to perform some sequence of actions, it is essentially making a commitment that such actions will do something (besides crash the product). And I’d be willing to bet there’s a UX engineer sitting somewhere at the company that produced that software who would agree vehemently with you and has been begging the team lead for permission to fix the bug you stumbled onto. But chances are strong that your bug, that rarely-seen edge case you found, will keep getting pushed off as new and more crucial work is discovered, and so you still won’t be able to apply that interesting workaround you found.

The Case of Scrappy v. Happy

What’s the lesson here? Certainly the average software development team can’t prepare for every “scrappy path” a user can discover. At some point, realistic tradeoffs must be made, and of course, the bottom line comes from the majority of customers whose base needs are met by the “happy path.” The few unfortunate edge-case users who find their broken “scrappy paths” are out of luck.

But it’s a shame we, as designers, developers, and solutions architects, don’t think about software this way more often. On the development side of the software world, we often say “it’s a feature, not a bug” when we’d like to claim that some unforeseen action is actually beneficial. Why can’t we lend the end user the same leniency? The user didn’t “break” the system; they used inputs they were given by the builders of the software to accomplish a task, and the software let them down. This gap is understandable, but that doesn’t make it any better a result. Though I try to strike a good balance, I can’t pretend I’m not also guilty of this oversight, especially when the pressure’s on.

Understand, Build, Deliver

The effectiveness of software in a human world stands to make a much better impact if we think of an end product not as a tool for accomplishing specific goals we prescribe, but as a collection of controls handed to our users. Software gives people a means through which to more easily accomplish their tasks, whatever they may be. It’s less the job of a builder to predict such tasks, and more so to think about how to build a system that allows a user to navigate a conceptual space without fear. A beneficial product gives the user their data and a means to control it, lends enough guidance to keep things secure and running smoothly, and then gets out of the way.

A “scrappy path” is also a “happy path” for one particular user. If you build a system right, it’s also a chance to make that person’s day a bit better.

Is a Neural Network of Glass an AI?

A recent publication in the Optical Society’s journal of Photonics Research demonstrates how certain computing tasks (referred to in the article as “artificial neural computing”) may be performed using a specially prepared sheet of glass (“nanophotonic medium”). For this paper, a neural network was codified into the glass by iteratively introducing impurities, effectively “training” it to the desired outcomes of the task — in the case of this research, to identify and recognize graphical numbers displayed to it.

As far as optics is concerned, this is an extraordinary accomplishment (at least it seems that way to an outsider like myself!). The applications of such a technology, if it reaches a state where it can be created quickly and to the needs of the moment, include a wide range of tasks such as facial recognition and fast, highly specific computation in low-power environments.

The technology’s limitations make it likely to apply to a fairly narrow domain, since without completely reconfiguring the glass (if such a thing is even possible), the neural network remains applicable only to the one task it was created for and no other.

Some news articles have seized on this achievement and deemed it an artificial intelligence. I’m no expert in the field, to be sure, but this titling worries me. Is every neural network an AI by default? Or is this really a highly complex, very specific tool? It can be calibrated, but only once. It can compute, but only in the sense that a mousetrap can compute when to spring — though on a completely different scale of complexity.

When you take the fundaments of such a system — any such system — you have a machine which takes an input via some sensing ability, performs computation on that input, and produces an output in the form of an action or record. In the technical sense, such a machine is sufficient to be called an intelligent agent, an AI. Whether the computation occurs by scattered light passing through a glass or by electricity flowing through a circuit — or for that matter, by gears turning — would seem not to make a difference to the fact that you have input, computation, output.
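The input–computation–output framing above can be sketched in a few lines. This is purely illustrative (the `Agent` class and its names are my own invention, not any real AI library): the medium of computation is just a function, whether it stands in for glass, a circuit, or gears.

```python
from typing import Callable

class Agent:
    """Minimal intelligent-agent skeleton: sense -> compute -> act."""

    def __init__(self, compute: Callable[[float], float]):
        # The computational medium -- glass, circuit, or gears --
        # is abstracted away as a plain function.
        self.compute = compute

    def step(self, sensed_input: float) -> float:
        # One cycle: take a sensed input, compute, emit an output.
        return self.compute(sensed_input)

# A mousetrap-like agent: it "computes" when to spring,
# firing only when pressure crosses a threshold.
mousetrap = Agent(lambda pressure: 1.0 if pressure > 0.5 else 0.0)
print(mousetrap.step(0.7))  # springs
print(mousetrap.step(0.1))  # stays set
```

In this technical sense the mousetrap and the glass differ only in the complexity of `compute`, which is exactly the point at issue.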

Granted, some systems may store information in gears, relays, or platters, while a sheet of glass cannot (true for the most part). Lack of storage does make some difference in terms of how we think about a computational system, but may it still be an AI?

I don’t know, and I doubt I’m qualified to decide the answer to this question, but I do know a trend when I see one, and like all trends, it needs temperance.

It’s wonderful that technology has become so popularized and exciting, but stuffing achievements under uncertain headings to fit a trend may not really benefit anyone (except perhaps publishers and academics, who both need the funds).

Referring back to the news article mentioned earlier, I was particularly struck by how certain aspects of this achievement were stated. If you read the line stating “It can also recognise, in real time, when a number it is presented with changes” you may be justifiably impressed that a sheet of glass can detect numbers, and skip by the minor detail that it would be nearly impossible for this process to happen in anything other than real time.

How exactly would a sheet of glass delay processing the light passed through it? There are ways to “pause” light in glass, but that has not been applied in this research. Similarly, it should be fairly obvious that, given a system that codifies an image by reflecting light, replacing the image displayed with another image that the system is also capable of recognizing would produce the same effect. It would be outright impressive if it didn’t, as you’d somehow have to establish a trigger in the glass to respond to the first image shown such that future images, even if the system could theoretically recognize them, would be ignored.

But I rant. The research in this domain is impressive and useful, and perhaps it’s forgivable to somewhat misrepresent it in an attempt to give it a wider audience.

Yet I can’t help but worry when basic realities of physics are glossed over in the interest of sounding impressive, or when important questions are blithely ignored in subjugation to a fad.

The original publication itself never mentions the term “AI,” though most citing news sources do, including the Optical Society’s own site. So maybe I’m wrong; maybe I’m being pedantic. Maybe the researchers had their own reasons for not using the term. But AI is a critical topic at the forefront of our societal picture and the implications shouldn’t be understated.

To quote Confucius, “The beginning of wisdom is to call things by their proper name.”

A Will to Fail

If you attempt a life goal and fail (the subject doesn’t matter), that’s your choice, and your burden. It’s your choice to try, and your knowledge that you may fail. If you should fail, you may fall back on your routine, or your savings, or the support (mental, physical, or financial) of a friend or family member, or on nothing, depending on the goal you attempted and the level of risk you deemed acceptable (or perhaps you didn’t think of the outcomes at all).

In any case you deemed the action worth the risk.

Typically, the risky ventures, of any kind, are those which stray from the norm. Some of them are good ideas which simply need acceptance or adoption. Others depend on a chain of smaller successes, or other supporting situations to enable them. Yet others were terrible ideas from the start; it’s not always easy to see the flaws at the outset.

Are those risky ventures worthwhile? Should the risk be taken?

We want to say, with clarity of hindsight — no! At least not for the obviously poor ideas.

But what makes an idea so obviously poor? Ideas have been denigrated and gone on to wild success. Others have been lauded and failed miserably. And some others meet their expectations, good or ill. It’s not so simple to know at the start whether an idea is good. Yet surely we must discard at the outset the truly insane notions — or must we? And if a project has potential, is it worth risk? Small risk? Great risk?

Open questions all, but worth turning over for anyone involved in an idea, process, product, or goal that requires an investment. Investment not necessarily just of money, but possibly time, effort, mental capital, favors owed, or other contributions.

When you start a project, whether you have gotten someone to do the thing for you and are giving them something in return for it, or are doing the thing yourself and intend to sink your time and spirit into it, you weigh the risks — most likely you do this already. Are you willing to fail? With even minor risk involved, you can’t succeed if you aren’t willing to fail.

It seems silly to accept the outcome of failure and see it as the end of the path you tread, since your intent is of course to succeed. But failure is just as likely as success, perhaps more so.

For some, the burning intent to succeed lets them leverage their own mentality, and by disregarding any notion of the possibility of failure, achieving a certainty that success is inevitable, they make that success a reality before long. It’s an exceptional trait, but I wonder if we don’t hear more often about those for whom it worked than those for whom it did not. I suspect, though I do not know, that this strategy is not for most.

Consider, at least, accepting the outcome of failure. Not for the accepted reasons of learning from failure, of gaining new insight, of treasuring the journey, for these (while true) are mostly cited for the benefits of feeling better and starting anew. Accept the outcome of failure instead because it provides you an invitation to risk. If you have measured failure, and know what it may be at its worst, and find it tolerable and worth the attempt, then you may jump at the risk you conceive with no fear — the bottom line is known, but the sky is the limit.

Recent Tweets (@MartinBerlove)