Through the use of Duolingo, textbooks, online tutoring, and immersion, I hope to bring my skills up to a level sufficient for day-to-day communication with native speakers, and to read technical papers written in German.
In life, we like to talk about our highs and our lows, the changes and trends, the roller coaster that we live on.
I like to think about these factors as line segments on a graph, and — taking the analogy with a grain of salt — use this approach to think about the quality and progression of my own life.
If we remain cognizant of the limitations of this approach, and of the fact that there are many ways to graph such a thing, none of them authoritative, then we can gain interesting insights into our own lives and take a broader perspective, one that makes it easier to let the day-to-day unfold without stressing too much over individual events. We can make changes in life, big or small, in hopes of bending the curve, and yet not worry too much if things don’t go as planned, since the curve is still being plotted.
Here are some thoughts I keep in mind when it comes to this graphical approach to life’s journey:
Not every high or low is indicative of a trend. Life has many local minima and maxima — and they’re not all inflection points.
Understand the fit of the curve to the larger graph of life. A month can be long, but in the course of your life it is a tiny period. Imagine the overall graph.
Likewise, to stay humble, recognize your curve on the graph that is the totality of history, and observe just how flat it really is.
That being said, the tiny curves may still have a huge impact on you personally, so recognizing the warning signs for an inflection point is a valuable skill — apply pattern recognition to these instances the same as you would to any other, and keep your algorithm flexible.
Understand the subjective nature of the subject you are plotting. Your curve looks different from other perspectives. But for your own appreciation, only the inside view counts.
Minor changes to the value of the independent variable may have unexpectedly large impacts on the output of the function that is your life — no one’s worked out how that particular function works, and everyone’s is different, so take your best shot.
On the other hand, remain cognizant of the fact that not all input value changes have a visible impact at standard zoom levels. Complex functions have tricky mappings.
Balancing the subjective and objective in a subject as complex as life is no easy matter. Taking the progression of our lives into account in a visual way can make it easier to gain perspective and, hopefully, make better decisions as life marches forward.
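To make the first of those thoughts concrete, here is a toy sketch in Python, with entirely made-up data, of how many of a noisy curve’s local highs and lows simply vanish once you zoom out by smoothing the series:

```python
# Toy illustration: many local highs and lows in a noisy "life curve"
# disappear once you zoom out (smooth the data). All data is made up.

def local_extrema(values):
    """Indices where the series has a strict local minimum or maximum."""
    extrema = []
    for i in range(1, len(values) - 1):
        if (values[i] > values[i - 1] and values[i] > values[i + 1]) or \
           (values[i] < values[i - 1] and values[i] < values[i + 1]):
            extrema.append(i)
    return extrema

def moving_average(values, window=3):
    """Simple smoothing: average over a sliding window."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# A jittery series with an overall upward trend.
life = [1, 3, 2, 4, 3, 5, 4, 6, 5, 7]

raw = local_extrema(life)
smoothed = local_extrema(moving_average(life))

print(len(raw), len(smoothed))  # prints: 8 0
```

Every interior point of the raw series is a local high or low, yet once the view is widened even slightly, none of them survive; only the upward trend remains.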
In the world of software, you often hear about the “happy path” of a product. A “happy path” is the optimal use case, the ideal journey a user takes to accomplish a goal in a piece of software, from start to finish. In such a conception, we expect that the user acts intelligently, provides reasonable input to forms and other controls, performs tasks in a sensible and predictable order, and does not stress the system.
Builders of software use this “happy path” as a marker of how the system should expect to be handled.
In the real world, we know such an idyllic use case won’t plausibly cover all or even most scenarios. And it then seems reasonable to ask this: If we know the “happy path” won’t suffice, why do we so commonly explore, support, and even depend on it?
Getting to the Happy Path
A quality software system endures many levels of testing before being released to the user. In a proper release flow, even the least plausible cases have been considered, tested against, and either disallowed from the user or elegantly avoided: the system is robust, and the builders are confident in its ability to handle whatever a user may throw at it.
In practical reality, we often lack the time and resources to handle all edge cases (there are so many!) or to ensure the soundness of each component of the software and how those components relate (the connections are practically infinite!) or to plan against every improper use of the software (users are very creative!).
So when it comes down to the wire and a project deadline is looming, we look at the software, at all its moving parts, and we ask where the line should be drawn. We consider how best to trade off between what ought to be done and what may realistically be accomplished. You can’t ship a broken product, but you won’t gain customers if you never ship the product in the first place.
Thus the “happy path.” It is the core use case, the key way we expect a feature or tool to be used. It is this flow, above any other, that must be supported and must prove strong, stable, and flawless (or at least appear so).
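In code, that tradeoff often looks something like the following sketch. The scenario and function names are invented for illustration; the point is the gap between a routine that assumes the happy path and one that defends the “scrappy paths” around it:

```python
# Hypothetical sketch: the same operation written for the happy path only,
# and again with the edge cases handled. Names and scenario are invented.

def transfer_happy_path(accounts, src, dst, amount):
    """Assumes the user behaves: both accounts exist, amount is valid."""
    accounts[src] -= amount
    accounts[dst] += amount

def transfer_defensive(accounts, src, dst, amount):
    """Handles the 'scrappy paths' the happy path silently ignores."""
    if src not in accounts or dst not in accounts:
        raise KeyError("unknown account")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

accounts = {"alice": 100, "bob": 50}
transfer_defensive(accounts, "alice", "bob", 30)
print(accounts)  # prints: {'alice': 70, 'bob': 80}
```

The happy-path version is shorter and ships faster; it also corrupts the balances the moment a user passes a negative amount or a typo’d account name. Deadline pressure decides which version gets written.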
Error: You’re Too Clever
This “happy path” method of balancing disparate needs doesn’t always work out well for the individual user. When someone purchases a piece of software, that person will naturally leverage their new tool however best suits the needs of the moment. But when software is built primarily around a “happy path,” it’s all but guaranteed to let down someone trying to bend the tool to work in a way that wasn’t originally thought of.
When you, as a consumer of a software product, try to do something in a novel and different way, thinking you’ve found a smarter method to accomplish a task, and then the software breaks, the “happy path” tradeoff is probably at fault. Congratulations, you’ve gone off the “happy path” and have encountered your own unique “scrappy path” — one of many edge cases that are difficult to predict and time-consuming to plan against.
You’re not wrong to get upset that the software isn’t working; if an interface allows you to perform some sequence of actions, it is essentially making a commitment that such actions will do something (besides crash the product). And I’d be willing to bet there’s a UX engineer sitting somewhere at the company that produced that software who would agree vehemently with you and has been begging the team lead for permission to fix the bug you stumbled onto. But chances are strong that your bug, that rarely-seen edge case you found, will keep getting pushed off as new and more crucial work is discovered, and so you still won’t be able to apply that interesting workaround you found.
The Case of Scrappy v. Happy
What’s the lesson here? Certainly the average software development team can’t prepare for every “scrappy path” a user might discover. At some point, realistic tradeoffs must be made, and of course the bottom line is driven by the majority of customers whose base needs are met by the “happy path.” The few unfortunate edge-case users who find their broken “scrappy paths” are out of luck.
But it’s a shame we, as designers, developers, and solutions architects, don’t think about software this way more often. On the development side of the software world, we often say “it’s a feature, not a bug” when we’d like to claim that some unforeseen behavior is actually beneficial. Why can’t we extend the end user the same leniency? The user didn’t “break” the system; they used inputs given to them by the builders of the software to accomplish a task, and the software let them down. This gap is understandable, but that doesn’t make the result any better. Though I try to strike a good balance, I can’t pretend I’m not also guilty of this oversight, especially when the pressure’s on.
Understand, Build, Deliver
Software stands to make a much better impact in a human world if we think of an end product not as a tool for accomplishing specific goals we prescribe, but as a collection of controls handed to our users. Software gives people a means through which to more easily accomplish their tasks, whatever they may be. It’s less the job of a builder to predict such tasks, and more so to think about how to build a system that allows a user to navigate a conceptual space without fear. A beneficial product gives the user their data and a means to control it, lends enough guidance to keep things secure and running smoothly, and then gets out of the way.
A “scrappy path” is also a “happy path” for one particular user. If you build a system right, it’s also a chance to make that person’s day a bit better.
A recent publication in the Optical Society’s journal Photonics Research demonstrates how certain computing tasks (referred to in the article as “artificial neural computing”) may be performed using a specially prepared sheet of glass (a “nanophotonic medium”). For this paper, a neural network was codified into the glass by iteratively introducing impurities, effectively “training” it toward the desired outcomes of the task, which in this research was to identify and recognize images of numbers displayed to it.
As far as optics is concerned, this is an extraordinary accomplishment (at least it seems that way to an outsider like myself!). The applications of such a technology, if it reaches a state where it can be created quickly and to the needs of the moment, include a wide range of tasks such as facial recognition and fast, highly specific computation in low-power environments.
The technology’s limitations make it likely to apply only to a smallish domain: without completely reconfiguring the glass (if such a thing is even possible), the neural network remains applicable to the one task it was created for and no other.
Some news articles have seized on this achievement and deemed it an artificial intelligence. I’m no expert in the field, to be sure, but this labeling worries me. Is every neural network an AI by default? Or is this really a highly complex, very specific tool? It can be calibrated, but only once. It can compute, but only in the sense that a mousetrap can compute when to spring, though on a completely different scale of complexity.
When you take the fundaments of such a system — any such system — you have a machine which takes an input via some sensing ability, performs computation on that input, and produces an output in the form of an action or record. In the technical sense, such a machine is sufficient to be called an intelligent agent, an AI. Whether the computation occurs by scattered light passing through a glass or by electricity flowing through a circuit — or for that matter, by gears turning — would seem not to make a difference to the fact that you have input, computation, output.
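That input-computation-output structure can be written down almost trivially. Here is a deliberately minimal sketch, with a mousetrap-like fixed rule standing in for the “computation”; everything in it is illustrative, not a model of the actual glass network:

```python
# A deliberately minimal "agent": input via sensing, a fixed computation,
# and an output in the form of an action. Whether the rule runs in
# silicon, glass, or gears does not change this structure.
# All values here are invented for illustration.

def mousetrap_agent(sensed_weight_grams):
    """Fixed, single-purpose rule: spring if the input crosses a threshold."""
    threshold = 20  # calibrated once, like the glass network's impurities
    if sensed_weight_grams >= threshold:
        return "SPRING"
    return "WAIT"

print(mousetrap_agent(5))   # prints: WAIT
print(mousetrap_agent(30))  # prints: SPRING
```

By the broad definition, this qualifies as sense-compute-act; the open question in the essay is whether that alone earns the name AI.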
Granted, some systems may store information in gears, relays, or platters, while a sheet of glass cannot (true for the most part; see for example https://www.sciencedaily.com/releases/2018/07/180711093109.htm). Lack of storage does make some difference in how we think about a computational system, but can it still be an AI?
I don’t know, and I doubt I’m qualified to decide the answer to this question, but I do know a trend when I see one, and like all trends, it needs temperance.
It’s wonderful that technology has become so popularized and exciting, but stuffing achievements under uncertain headings to fit a trend may not really benefit anyone (except perhaps publishers and academics, who both need the funds).
Returning to the news coverage mentioned earlier, I was particularly struck by how certain aspects of this achievement were phrased. If you read the line stating that “It can also recognise, in real time, when a number it is presented with changes,” you may be justifiably impressed that a sheet of glass can detect numbers, and skip past the minor detail that it would be nearly impossible for this process to happen in anything other than real time.
How exactly would a sheet of glass delay processing the light passed through it? There are ways to “pause” light in glass, but that has not been applied in this research. Similarly, it should be fairly obvious that, given a system that codifies an image by scattering light, replacing the image displayed with another image that the system is also capable of recognizing would produce the same effect. It would be outright impressive if it didn’t, as you’d somehow have to establish a trigger in the glass to respond to the first image shown such that future images, even if the system could theoretically recognize them, would be ignored.
But I rant. The research in this domain is impressive and useful, and perhaps it’s forgivable to somewhat misrepresent it in an attempt to give it a wider audience.
Yet I can’t help but worry when basic realities of physics are glossed over in the interest of sounding impressive, or when important questions are blithely ignored in deference to a fad.
The original publication itself never mentions the term “AI,” though most citing news sources do, including the Optical Society’s own site. So maybe I’m wrong; maybe I’m being pedantic. Maybe the researchers had their own reasons for not using the term. But AI is a critical topic at the forefront of our societal picture and the implications shouldn’t be understated.
To quote Confucius, “The beginning of wisdom is to call things by their proper name.”