Do Your Updates, Part II

Firstly: a new Apple update is out for phones/pads/Macs, and you want to take it *as soon as possible*. Not only does it patch a zero day, that zero day is under active exploit. This means a problem was identified before the vendor had any time to fix it (zero days to fix) and professional attackers are already abusing it (under active exploit). Granted, the typical targets of these things are journalists, government officials, etc., but also folks working at corporate offices. Maybe even you.

One of the questions I have fielded since Do Your Updates is best distilled as “why can’t developers do it perfectly the first time?” Aside from the unrealistic expectation that an engineer not be human, there are a few reasons for this.

  1. The biggest vulnerability in any system *is the humans*, and it’s not just the humans building the system, it’s the humans *using* the system. Phishing is those emails asking you to click a link urgently or telling you “here’s your PayPal receipt” for a transaction of several hundred dollars (designed to make you panic). Social engineering is more like the person calling you on the phone saying they’re calling from Chase to verify recent fraudulent activity and asking you for things like your passcode, to verify a 2FA code, etc. Both methods rely on making the target feel *vulnerable* and on creating a sense of urgency.
  2. Code evolves and so does technology. There was a time when a very strong password was sufficient to guard your stuff — but then we had data breaches. So then we added 2FA (two-factor authentication, e.g., when you get a text with a code to support your login) — but then we had SIM swapping. So then we added MFA (multi-factor authentication), physical YubiKeys, etc., etc. — for each fine cat, a fine rat: engineers on the malicious side are not resting, so engineers on the corporate side cannot rest, either.
  3. We talked about packages and post-deployment vulnerabilities in Do Your Updates. That is still a thing.
  4. There are *a lot* of ways an attacker can poke at the platform or the code:
    • They can insert things into the text boxes of forms (e.g., the text box in which you give your feedback on a thing) that interrupt the inbound form contents, to try to get into the database in which those contents live. This goes by a variety of terms and has a variety of methods; one of them is called SQL Injection, and it is/was the first thing I learned about cybersecurity, aside from “never share your password”, back in 2002 (see the sketch after this list).
    • They can do something called a “brute force” attack, which is just like it sounds: employing a variety of clients to just pound the ever-loving crap out of any intake on a site, either to force it to give up and let them in or simply to take the site down (a DDoS: distributed denial of service). 2FA helps with this, but so does throttling (making it so that only so many requests are allowed before it locks you out), or Captcha/reCAPTCHA (there is a throttling sketch after this list, too). Except now AI can pick out all the parts of the image that are a “motorcycle”, even if you can’t. And so now engineers have to figure out the difference between a less tech-savvy person reaching for their paper-written passwords and typing them carefully but incorrectly into the little box, and an AI acting as such.
    • They can code up sites that *look* like the site you want to go to, where even the URL looks like the site you want to go to — except maybe instead of an “O” it’s a “0” in the site name. You go to the site that looks legit, because the attacker has scraped/copied the design from the legitimate one, and you type your login as always. Because it’s not the real site, it tells you “oh gosh, we need to verify it’s you, please type in the 2FA code”, and instead of sending that code to the real site and doing a real authentication, you are handing it to the attacker so they can go log in as you.
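
To make the injection bullet concrete, here is a minimal sketch in Python using the standard sqlite3 module; the table, column, and hostile input are invented for illustration, and real applications layer on more (ORMs, input validation, least-privilege database accounts), but the pasted-string vs. parameterized-query distinction is the heart of it:

```python
import sqlite3

# A throwaway in-memory database standing in for "the database in which
# those contents live" (table and column names are invented for this sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedback (id INTEGER PRIMARY KEY, body TEXT)")

user_input = "great site'); DROP TABLE feedback; --"  # hostile "feedback"

# The vulnerable pattern: pasting the input straight into the SQL string,
# so the attacker's text becomes part of the query itself. (Never run this.)
dangerous_query = f"INSERT INTO feedback (body) VALUES ('{user_input}')"

# The safer pattern: a parameterized query treats the input purely as data,
# no matter which quotes or keywords it contains.
conn.execute("INSERT INTO feedback (body) VALUES (?)", (user_input,))
conn.commit()
print(conn.execute("SELECT body FROM feedback").fetchall())
```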

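And for the brute-force bullet, a toy sketch of throttling, the “only so many requests before it locks you out” idea; the function, thresholds, and in-memory bookkeeping are hypothetical simplifications (real systems persist this state and add backoff, IP reputation, CAPTCHAs, and so on):

```python
import time
from collections import defaultdict
from typing import DefaultDict, List

MAX_ATTEMPTS = 5        # attempts allowed per window (illustrative number)
WINDOW_SECONDS = 300    # size of the sliding window, in seconds

_attempts: DefaultDict[str, List[float]] = defaultdict(list)

def allow_attempt(account: str) -> bool:
    """Return True if a login attempt is allowed, False if throttled."""
    now = time.time()
    # Keep only the attempts that are still inside the sliding window.
    recent = [t for t in _attempts[account] if now - t < WINDOW_SECONDS]
    _attempts[account] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False  # too many recent attempts: lock this account out for now
    recent.append(now)
    return True
```
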
AI is also not going to solve our security problems — it will make them harder to solve (as malicious folks have access to AI, too) – but it can help. AI can be used to detect anomalies faster (in most cases you don’t have to tell your bank you are traveling, because it uses anomaly detection to figure out whether that was really you booking a 7-night trip to Cancun), or even to predict patterns for exploits. Even then, it will not be replacing the engineer or making what the engineer does perfect. This dance does not end.

So do your updates.

Antici…pation

Twenty-five years ago (and five days) I was at a gas station in Oceanside, California. It was something like 2pm, and this was the era of TV screens in gas pumps being the Hot New Thing. You couldn’t control what was on them, and mostly they were set to a news channel. It was December 31, 1999, and the United States was on the precipice of the year 2000. The world was angsty for a variety of reasons, geopolitically, but also for administrative ones: most computing software (including operating systems) had been programmed to store the year as two digits. So in 1977 or 1986 or what have you, the developers would have the year stored as 77 or 86, respectively. This wasn’t the case with all software, but it was the case in enough places that when 99 rolled over to 00 we would have a problem – H.G. Wells had written The Time Machine, but should enough machines and systems decide it was 1900 instead of 2000, all hell would break loose.
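
A toy illustration (not any real system’s code) of why the two-digit shortcut bit so hard at the rollover:

```python
from datetime import date

def age_with_two_digit_years(birth_yy: int, today: date) -> int:
    """The pre-Y2K shortcut: keep only the last two digits of the year."""
    return (today.year % 100) - birth_yy

# Someone born in 1965, checked one day apart:
print(age_with_two_digit_years(65, date(1999, 12, 31)))  # 34, as expected
print(age_with_two_digit_years(65, date(2000, 1, 1)))    # -65: hello, 1900
```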

Much as with the traveling barge of garbage, this was a wake-up call to folks who hadn’t had to think about the dependency on computing and technology. In 1999 there was email, and you could apply for jobs and get your bank statements and purchase things online, but it wasn’t the default the way it is today – many people still got paper statements, it was very common to get regular mail from regular people, and though we were on the verge of the dot-bombs in 2000, online shopping did not yet compete with brick and mortar.

The reminders were everywhere: Y2K news stories, mail with updates from every OS and software provider about what they would be asking folks to do to update their stuff. Towers (aka “desktops”, so named because it was a tower-shaped box you kept under your desk or in a separate compartment of it, because your monitor most definitely was NOT flat) came with stickers reminding people to turn off their machines by 12/31/1999, just in case. There was both too much and not enough information about what Could happen, what Should happen, and what Would (probably) happen.

As we know, in the end what happened was Not Much. The thousands (if not millions) of people set forth across the globe as part of “IT Departments” (or as the consultants who would come out, for not every business had one) updated software, operating systems, and sometimes hardware, to avoid the potential disaster.

But we didn’t know that then.

There was palpable apprehension as the world rotated towards 2000: many folks took out extra cash, got extra groceries, and had paper copies of everything to Prove what they owned/should have. This was juxtaposed with the idea that as a globe we were headed into not just a new century but a new millennium, that the Cold War was still over (and we didn’t have one again yet in the Middle East), and that Europe was doing its collective government thing, which looked hopeful. The biggest scandal in the US was that our President had gotten a consensual blow job in the Oval Office and repeatedly lied about it.

And so there I was, pumping gas into a 1996 Dodge Neon, watching the TV screen… and it showed Moscow as the clock turned midnight there. There were fireworks, people partying in the streets… but most of all there was power to the buildings, and amidst the celebrations it looked like everything was “normal”. In my head I figured, “if Russia can get through Y2K, so can we”. Remember, the Iron Curtain had collapsed, and Russia was in a tangle of oligarchic battles and a seriously unstable government.

Here we are 25 years (and some days) later, and we are again on a precipice — or many of them. What is AI going to do, really, in this next century? How are we (the collective we) going to deal with the impacts of climate change (rising sea levels, increased-intensity hurricanes, no-snow winters, etc.)? How does the world work without a polarity of superpowers (it used to be pretty much one or two — and now it’s more than that)? We live in a world where we can now vaccinate against some cancers and treat still others successfully; we can 3-D print heart valves, and we have mapped the human genome so successfully that you can figure out who you’re really related to with a cotton swab and a relatively small financial outlay. We have meat alternatives and organic farming and bitcoin and electric cars.

We also have increased conflicts, questionable ingredients, vaccine hesitancy and/or denial, four or five wars (depending on how you’re counting them vs “armed conflict” — but someone who dies in an armed conflict is just as dead as someone who dies in a war), a craptastic healthcare system (in which we pay more in premiums and personal outlay than we would in taxes to support a nationalized one, and in which drug makers have essentially carte blanche to set their prices (unlike everywhere else in the world)), billionaires publicly calling the shots (instead of in private like the good ol’ days), and a general decrease in civility in society (it is now perfectly acceptable to be an asshole in public apparently).

In 1899, the world was on a precipice, too; they just didn’t quite know it. I mean, sure, new century; but in their heads there was the Big Global Power (hello, England), your food came from your local farms and maybe came in by train, if you got a severe infection you very likely died (penicillin wouldn’t be around for another 28 years), and two World Wars and a Great Depression were in the next 50 years. The people who went into New Year’s 1900 had had trains and telephones and typewriters and cars, but they didn’t have planes and space shuttles and computing machines. If you had said the United States and China would become superpowers in the coming century, your peers would have thought you were absolutely bonkers. I’m sure that as 1899 rolled to 1900 there was apprehension and agita much as today: those kids were spending too much time with the phonograph, or using paper in class instead of a slate (so wasteful!); the prospect of bank runs was fresh (the Panic of 1893 had been only seven years prior); the Boxer Rebellion and the Philippine-American War were active (apologies, this post is very US-centric). The things that they knew about were, historically speaking, overshadowed by the things that came — good and bad.

We go into 2025, that “perfect square” of a year, with a mix of hope and dread, exacerbated by a 24/7 news cycle that is fragmented, biased, hysterical, and algorithmically defined. We can posit, speculate, and make educated guesses at what the future holds. We will not know, though, until it is here.

Tell the Story

About 11 years ago, I left a job because my interests were not aligned with those of the people I reported up to. Not my immediate manager (she was great), but the leadership in that particular organization was interested in “telling the story”, and I was not. Allow me to explain: by “tell the story”, they specifically meant altering data to fit their preconceived narrative.

Storytelling serves a purpose: it provides a grounding for people to understand the message you are trying to convey. Most of the stories that we have align to some form of learning mechanism: either about human nature, or what to do or not to do in a situation, or why a particular belief is correct. Stories are not always accurate, but they are a useful tool. I have no problem with “telling the story”.

The difficulty for me is when somebody wants to tell a story that the data do not support. The data tell story A, and the person wants me to tell story B. There are people who can spin an A into a B, who can make silk out of a pig’s ear or gold out of flax. I am not that person. If it is, in fact, silk, I can wax on relentlessly about the properties of the silk. If it is, in fact, gold, I can illustrate all the ways in which that gold can be used. I am not going to tell you that flax is gold. Flax has its place, and it can be useful, but I’m not going to tell you it is something it is not.

This “tell the story” requirement was handed down repeatedly, in various business meetings, over a six-month period, and it drove me nuts. I was “mad” in the traditional sense, and I took the first job that presented itself in order to get out. This was a rash decision: it meant going to a place where I took what ended up being a pay cut, for work that ended up changing in charter. I lasted at that job exactly one year before coming to where I am now. Or at least to the company I am in now.

I continue to hear the “tell the story” requirement, through various roles. In program management you are often required to tell the story: in a technical way to engineers, and in a less technical way to management (depending on your management). It’s a sort of translator function, and I enjoy it, particularly in the role I’m in now. This is because I am not asked to fabricate a story; rather, I can take the data presented and tell the *actual* story.

The thing is, those six months did so much damage in my head that every time I hear “tell the story”, it rankles me. I remember being asked to change the data to suit the narrative that was provided, rather than the one the data told. As we increasingly have more immediate, more numerous, and more popular social media platforms, the desire to “tell the story”, and the use of that phrase, increases. The rankle in my brain also increases.

A further complication is that there is a seemingly endless supply of people who are willing to tell a story, to illustrate a point, that is not based in any sort of data or fact. Or, perhaps worse, one based on cherry-picked facts that ignore other data (“oh, those are outliers”). These stories would not survive peer review. Sometimes you can see it right away, and sometimes you cannot; this leaves the audience to bicker amongst themselves as to what counts as real, and which stories are right.

We are, as ever, in an election year. Technically speaking, every year is an election year: it’s just that most people tend to focus on the ones that happen every four years, as they offer a change in the highest offices of our country, as well as the entire House of Representatives, and about a third of the Senate. There are other posts and positions up for grabs as well, and ballot measures that fund schools, and fire departments, and port commissioners, and judges, and all kinds of roles. Most often, the stories we are inundated with are for the highest roles, though there are smaller stories for smaller roles as well. We are left to pick through the stories, and look for the data, and “do our research”, which is rather difficult in the absence of real data, which itself has been supplanted with stories.

There comes a time when every story ends. The book closes, or the campfire gets quiet, and you are left with the story in your mind, and the choice to do something with it. You can take the analogy, you can take the lesson, you can take the idea; or you can leave it. The important thing to understand is that stories are just stories: they are one of several means of conveying information. It is up to the listener to understand the nuance and the context of that information before making any decisions.

The Cost Basis of Non-Monetary Recognition

Recognition, without an understanding of the value of it, is worthless. Or at least, discounted.

On one hand, it’s a bit daft to say that: the very definition of recognition (say that five times fast) in the sense of a positive acknowledgment is “appreciation or acclaim for an achievement, service, or ability” (per the OED). If you do not understand or value what is behind that appreciation or acclaim, it is difficult to understand or value the recognition itself.

Sometime between 2009 and 2012, at the end of a PTA year, I was awarded the “Golden Acorn”.

At the time I was awarded this, I did not know what it was. I mean, I got a nice certificate, and a cute little pin (indeed, a little gold-colored acorn with WAPTSA on it – Washington PTSA), and everyone clapped, and it was nice. I still had no idea what it was. I was thankful for the clapping and for the little get-together our PTSA had, where folks were verbally recognized and got little certificates and we put another PTSA year to rest. (I was on the PTSA board from 2008 to 2021 – the years, they blend together.)

I still had no idea what the Golden Acorn was. I didn’t have a background in its value, or understanding of its place, priority, or frankly, point. I mean, thanks for the recognition in the meeting, but did I need a tchotchke? Not really. Did I ever really look it up? Nah. I still have the little golden pin in my “collection of weird little things I’ve acquired” drawer.

Last night we (the Royal We) finally got around to voting – in Washington State voting is done by mail, so the three of us dutifully sat around the dinner table, one with their computer up to do research, one reading the voter’s pamphlet, and the third asking pointed questions here and there (and/or running explainers when needed). As part of this the pamphlet reader would read out the position, education, community service, and qualifications of each candidate. We found two Golden Acorns in there.

It was hard for me to figure out why those would be listed in a voter’s pamphlet, nestled among information like where someone got their JD or which Rotary club they were board chair of. To me, this was a chintzy little pin and a nice piece of paper that I was certain no one outside of my little PTSA would be familiar with. I was wrong.

Here’s the thing: because I didn’t know this, the *complete value* of the award went over my head. Had I known and understood what it meant, I would have written thank-you notes (I am not joking). I would have been much more humbled. Heck, it’s been 10-15 years since I got this thing, and it’s tickled my brain repeatedly in the last 12 hours. Yet at the time I didn’t know the full value of the award, and therefore the full value of the recognition escaped me.

Recognition in the workplace takes many forms: you can get a shiny new title. You can get money. You can get your name checked in large bold font across emails or reorganization announcements or “shout outs” at meetings. You can get pizza lunches, DoorDash gift cards, or even twenty-sided dice. Unless the person *receiving* it values those things, though, it’s not as impactful as one would hope.

This is further complicated by the fact that not everyone values the same things. Some of us are more mercenary than others and straight cash will do, thank you. Some of us like our name in bold letters more. Some of us are food-hounds. Leadership therefore has a tricky problem: how do you properly recognize an individual, or a team, in such a way that *they* value it? In large teams — where you have hundreds of people — finding out whether they are more into visibility or cash is problematic; a direct line manager should have that understanding of their team(s), but rolling that up into a nice, neat “delivery” that accommodates everyone is impossible. Even if you knew that persons A, B, and C preferred money and persons D, E, and F preferred visibility, once the rewards are out there, minds can change.

The solution, then, is to do both. One of those things costs *nothing* in fiscal terms. It’s fairly obvious that cash rewards (or similar financial rewards: stock, etc.) have a cost associated with them, and that has to fit within an overall budget for the company, etc., etc. Genuine verbal and visual recognition of folks for a job well done, however, can and should happen publicly and directly. While folks understand the value of a dollar, they need to understand the value of the non-monetary recognition as well.

What does it mean for this VP or that VP to call out your name? What does it mean to have your name identified in a given mail, or proffered in a given meeting? And how is that meaning, and value interpreted based on its origin?

The Golden Acorn award I got was meaningful at the time (and still is now) because my *peers* and my board chair were the ones to present it. That it was backed by the state PTA was not known to me at the time; now the value that imparts is a calculation of breadth and an understanding of rarity – there isn’t always one per PTA per year, and its value is understood across the state in the context of PTA. Similarly, the value of verbal or visual recognition (in *addition to* the practical rewards of money) is directly related to the *recipient’s* understanding of the breadth and rarity of the person or entity providing it. If I don’t know you and/or understand what it means to be praised by you, the value of that praise is somewhat diminished from what it could be.

Engineering

It’s not often that I’m struck by something on LinkedIn that makes me think. That sounds bad; let me rephrase: it’s not often that I’m struck by something on LinkedIn that leaves an impression that lingers in the back of my brain after I leave the page. Usually, it’s a celebration of folks getting new jobs, folks leaving old jobs, folks looking for jobs, and a smattering of posts recapping job-like events. Sometimes there are adages and platitudes and we can all resonate with that image of the bent tree that ultimately succeeded or whatever.

It’s Boxing Day, or the Day After Christmas, and I’m poking around corners of the internet while waiting for the Nth load of laundry and figuring out how I’ll keep myself occupied for the next few days (yes, privilege). And so I found myself scrolling LinkedIn and landing on this post by Nick Costentino. I don’t know Nick; we are “once removed” via a connection I have (or perhaps more than one, that’s the nature of LinkedIn). But this title, and this post, stick in my brain: “As a Software Engineer, you don’t need to know everything.”

Nick goes on to illustrate that good software engineering is not about having all the answers and/or “just knowing”, it’s about problem solving and being resourceful. It’s about having the *framework* (not in the software sense but in the “I can wire my approach to this” sense) to identify and solve problems. And my off the cuff reaction (which I commented) was: this isn’t just for software engineering; this is for life.

Depending on your geography, family affluence, and other circumstances, you got an education in your formative years. That education may have had you learning cursive and doing geometric proofs and diagramming sentences and such, but for anywhere from 10 to 15 years you were formally trained in Things Society Felt You Should Know. A *good* education didn’t just leave it at that; a good education taught you how to work with circumstances that were not solvable by rote memory: what is the scientific method, after all, if not “f*ck around and find out”? The idea being that instead of churning you out at the end of high school or college/uni with everything in your brain and it being 100% “full”, you were instead armed with concepts, ideas, and a method of approach to solve problems and self-manage.

I am not saying that that is the way it is for everyone — “No Child Left Behind” left a *lot* of kids behind, and the current systems in place vary widely depending on socioeconomic factors. Broadly speaking, however, people come out of high school and/or further education with the impression that they should *already* know everything, and that it’s just a matter of grinding your way to the “top” (whatever top that may be). And that the path for one’s career is set, and immovable.

Careers are malleable things, and so are brains.

You will not, ever, ever, ever know all the things. There will be edge cases, there will be corner cases, there will be So Many Times you are working with Not All the Information and frequently it will be because either someone you were relying on for it didn’t know or because some process or person thinks you didn’t need it. Or the systems in place were developed by someone who left five years ago, and no one can read their notes/handwriting (if indeed any were left). This isn’t just in the software engineering world: I have had the luxury of having a few different “careers” in the last 30 years, and in every one of them I can point to a circumstance in which the person who should have known everything (the Vet, the Pharmacist, the Travel Agent, the Manager, etc.) did not know everything and what we all had to work with were some clues and guidelines and our very best efforts. Anyone who has been handed the curveball of an unexpected medical expense, your car breaking down, mystery crumbs on your kitchen floor, or any myriad of things in Being an Adult in the World has experience with the “I don’t have all the information, but I have to deal with this” scenario.

Education is *a* foundation, from which your brain gets wired (with experience) on how to approach the crazy that life throws at you. May your frameworks be resilient and resourceful.

Diamonds and Graphite

[Edit: Math]

It’s a time of nontrivial pressure here at my work, as we arrive at the end of one Fiscal Year and on the precipice of another. Like a calendar New Year this invites all sorts of process of evaluation and review and planning, meaning that if you are a front-line manager you are currently juggling the evaluation of your individual team members (both career wise and performance wise, which I would argue are two different things), the evaluation of the team as a whole (did we do the things we said we were going to do and if not why not), the planning for what the team will do in the coming period (as informed by previous), and the budgeting for that plan (which… is a bit more constrained everywhere). Every year we “kid” ourselves that come the new Fiscal Year things will Calm Down because we will have Sorted Out Last Year and we have a Plan For Next Year; and in some cases, that’s legit. In others, it is an invitation to self-delusion.

One analogy I hear a lot is how you can’t make a diamond without pressure. Sure – that is correct in principle: if you want those carbons matrixed such that they create a 10 on the hardness scale, and you can use them as effective tools to cut other things (e.g., industrial diamonds) or to inspire awe and avarice (e.g., diamond adornment), then rock on: apply your pressure to that carbon. It’s expensive, but the end product is useful, and sometimes pretty.

You know what else is made of pure carbon? Graphite. The stuff in your pencils (whether they be Dixon Ticonderoga or mechanical pencils) is graphite, and it’s *elementally the same* as diamond; it’s just configured differently. If a diamond is matrixed carbon, graphite is stacked sheets of carbon. (You can see diagrammatic and explanatory differences here.) And while graphite requires pressure too (about 75k lbs/square inch to form), diamonds require tenfold more pressure.

Diamonds and graphite are measured differently — even the goth diamonds (industrial diamonds) are priced in carats (one carat = 0.2 grams), while graphite is priced by the ton (one ton = 2,000 pounds, and one pound is about 454 grams). Industrial diamonds can be priced as low as 12 cents per carat (I’m using industrial diamonds here because they produce work, vs. other diamonds being for “art”). Graphite is running about $2,281 per ton. Convert both to price per pound and graphite comes out around $1.14, while industrial diamonds at that 12-cent floor come out around $272: a price gap far wider than the roughly tenfold pressure gap.
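
If you want to check my arithmetic, here it is spelled out; the prices are the rough figures cited above, treated as illustrative rather than as live market data:

```python
# Rough unit conversions
GRAMS_PER_CARAT = 0.2
GRAMS_PER_POUND = 453.6
POUNDS_PER_TON = 2000

diamond_usd_per_carat = 0.12   # low end for industrial diamond
graphite_usd_per_ton = 2281.0

diamond_usd_per_pound = diamond_usd_per_carat * (GRAMS_PER_POUND / GRAMS_PER_CARAT)
graphite_usd_per_pound = graphite_usd_per_ton / POUNDS_PER_TON

print(round(diamond_usd_per_pound, 2))                        # ~272.16
print(round(graphite_usd_per_pound, 2))                       # ~1.14
print(round(diamond_usd_per_pound / graphite_usd_per_pound))  # ~239x the price, for ~10x the pressure
```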

You get what you pay for. But what do you want?

You wouldn’t, for example, use an industrial diamond to sort out your notes, to sketch things, to use as a heat sink for your laptop, to use in a battery, to reinforce plastic, or to deflect radar; you wouldn’t use graphite to grind or cut things. The pressure exerted produces a fundamentally different material, and you use the material differently. The markets are also different: the industrial diamond market is projected to be $2.5bn by 2028, while graphite is headed to $25.7bn in the same year.

Which is a link-and-fact-ridden way to say that if you are valuing the pressure for the pressure’s sake then you are not valuing anything at all. You can hone a clump of carbon into a very, very specific tool with very, very specific use cases in a narrow-ish market (again, unless you’re doing it for “art”), or you can use about a tenth of the pressure and get a fundamentally broader application from your toolset.

If the metaphor hasn’t hit you with a carbon-fiber baseball bat yet, here we go: reveling in the volume of pressure applied to personnel, on the theory that “people work well under pressure” and “you can really see the value people provide when they are under pressure”, is a detrimental and flawed approach. If what you want to do is hone a particular person into a particular niche, understand that you are developing a very, very hard matrix in that human, one that will allow them to go cut things and grind things, but at the expense of their ability to buffer things, to connect things, and to elucidate things. And if you are asking the human to do both of those things, then you will get neither done well.

Sure, people are not elements (well, technically people are made of elements, collections of them, but whatever). People have the ability to compartmentalize, to exercise sentience, to make decisions, and to make choices. Some of us were hardened and pressured and then had things written in our reviews like “bodies in the wake”; it takes a lot of hard work for the person upon whom the pressure was applied to de-matrix their carbons and get back to those nice flowy sheets. (Years. It takes years, trust me.)

We should not spend all of our efforts trying to create batches of diamonds alone, and we should identify and appreciate the need for graphite.

Closure

We’ve all had that friend with that messy relationship that doesn’t end well, and someone ends up seeking “closure”. And the closure-seeker is usually denied that: the other party has ghosted or cannot or will not give the answers the closure-seeker needs.

Closure is not for the person who left, closure is for the person who is left behind.

With the volume of layoffs out there, there are those who are leaving (and that sucks), and then there are those who are left behind. We need to acknowledge that also sucks. There are (broadly) two sets of folks left behind in the workplace after layoffs: managers, and individual contributors. Much as with Now What, this is the best I can do (for now) with some things to think about:

Individual Contributors

If you’re an “IC”, it means you aren’t managing anyone but yourself and your workload. In a post-layoff world that’s a lot to manage, because you are also having to manage your response. You probably have survivor’s guilt: a combination of wanting to know why specific people/groups/etc. were picked, replaying in your head what decisions you would have made had you been in charge, worrying about whether another shoe is going to drop, and trying to figure out what it all means. Things feel a little unstable, and that seems to seep into your everyday work, even after the team meetings and frank conversations have subsided.

  1. Understand that you will not get answers. It’s rare that the full weight of the decision-making or rationale will ever be exposed and you’re likely being protected from some uncomfortable choices that someone else had to make.
  2. Understand that it is not your fault. I grew up with the adage that “you cannot say it is not your fault if you cannot also say it is not your responsibility” and frankly, if you’re not a manager, you were not part of any decision-making process, and therefore it wasn’t your responsibility, and therefore it wasn’t your fault.
  3. Seek to control what you can control. You can control your response. You can set boundaries in your work and personal life. You can (hopefully) provide input *to* your management about what the next steps can/should be as you see them.
  4. Take a breath: this is unpleasant, yes, but it also affords you a swift spiritual kick to the gut: why are you here? I mean, yes, why are you here in the cosmic universe, etc., but also: why are you in this role, doing this thing? Do you still like it, even with its ugly bits? Is it time to *plan for* (not execute) a change in the coming months/years? What would you need for that, or if it isn’t time for a change, what do you need to double down in your current space?
  5. Give yourself time to grieve. Grief processing looks different for each person; in my case I carefully box it up and put it ‘way down while I focus on tasky and strategic things and then it blows up in my face some months later. I do not recommend this approach, but I identify with it.
  6. List what you learned. Especially if this is your first experience with layoffs, pay attention to what you learned – how did you respond, what do you wish you had prepared at home or in life for this, what conversations did you have to have at home or at work and what did you need or want for those?

Managers

Congratulations! You get to own the message. You may or may not have had a direct input into the decision-making process, but you’re in it and must execute on it, and now you are down one-to-N team members, and you have folks on your team who are scared, disoriented, or frankly freaking out. Typically, layoffs come with a “redirection” or “new focus” so you get to manage your team not only through this massive change in their/your dynamic but *also* potentially with new or altered charter.

  1. Acknowledge the elephant(s) in the room. Yes, there are/were layoffs. Yes, people are impacted. No, you don’t have answers and/or you can’t give answers. Yes, it sucks. Give yourself, and your team, a space to vent, ask questions, and work through their stages of grief. If you are only going to open that space for one meeting and move on, that’s your call, but be transparent about it.
  2. Support your team. This means providing that vent space, but also reminding them of any work benefits that provide therapy/counseling, reminding them of the need to take time, acknowledging the new work dynamic and doing your best to answer their questions about how things will work in the future.
  3. Clear, consistent, and candid convos: You do not have all the answers but that doesn’t mean clamming up is a good idea. There are going to be tough discussions ahead: who works on what, what work drops, or if somehow the expectation is that you do more with less, be candid about it. Euphemistic handwaving about a “brave new future” isn’t helpful when it comes with the same ginormous backlog.
  4. Recognize growth. As the team progresses through this event you will see signs of improvement and/or growth; recognize it and publicly appreciate it. This isn’t to say there won’t be folks who take longer to get through it, but when you see signs of progress do acknowledge it: grit deserves recognition.
  5. Everything that applies to an Individual Contributor also applies to you. Meaning, you need to give yourself time to grieve, you need to evaluate how you will approach this or what you learned, you need to take a breath. You did not stop being a human being when you became a manager and you may need to remind yourself of that.

Change Management, Part II

Following up on the earlier post, as I have had Spare Time™ courtesy of a bout of COVID.

The Ripple Effect

I failed to mention previously that Big Changes tend to have ripples. Much like when you throw a rock into a pond and then another rock shortly after it, and the ripples sort of crash into each other, creating other ripples: that is how post-major-change ripples go. For example: you have broad reorganization A – let’s say whole departments move, charters move, Big Changes happen. That’s the first rock.

As the ripples from the first rock stretch out to other parts of the water, things in that part of the water get impacted — in this case, there are the tactics of administering a reorganization (changing of cost centers, migrating of resources, identifying process or people gaps, revising projections, etc.) and then there are the tactics of reacting to a reorganization (I had guaranteed funding from your team to do X; you have gone through a reorganization; is my dependency on you at risk?). After enough buildup of these ripples, it often comes to management’s (correct) mind that another reorganization is needed, to account for the things that weren’t immediately derived or attended to with the first one. This “aftershock” reorganization is typically smaller, more nuanced, and often has better details worked out (direct reporting lines, accounting for previously identified gaps, etc.). Perhaps predictably, this aftershock can breed additional, smaller aftershocks (or, additional, smaller ripples) that eventually calm down as they extend through the system. Depending on what time of year The Big One hit, the Little Ones can extend 3 to 6 months afterwards.

Driving To Clarity

The unloved but absolutely necessary job of the shitbird.

I’m sorry, there’s no better way to put it, although LinkedIn-me wants to change “shitbird” to “change facilitator” or something. The bottom line is that oftentimes the people who have to drive through the stickier parts of the ambiguity pursuant to a reorg (particularly when we are talking about things like charter, support, keeping programs running, transfer of knowledge, transfer of understanding (those are indeed two different things), and so forth) are incredibly unpopular, because we are often the ones pointing out the un-fun things to be done. For example, if the reorganization of people and charter does not equate to a clean reorganization of resources, there’s typically a lot of tedious work in identifying which resources go where, which ones can’t move until they’ve been reviewed, etc. In a world where development teams are already stacked with feature and fundamentals work, the tactics of a reorg often amount to an unfunded mandate, and they are not usually expressed in a cost of hours (e.g., this reorganization equates to N developer hours spent on the tactics of the reorg).

Note I do not say “wasted”. The time spent inspecting and enabling a reorganization to be successful is *not a waste* if it is done transparently, with an understanding of the purpose of the reorganization, and in good faith. Like any effort, there are costs; the overall reorganization ostensibly results in greater long-term efficiency, development, or productivity. There is a short-term cost, however, and I’ve yet to see any reorganization actually attempt to size that cost, let alone get better at sizing and predetermining the costs associated.

Tactics vs Strategy

Thus far all of my conversation here has been about “tactics” because the reorganization itself is the output of a strategy decision, and the implementation and administration of the reorganization is all tactics. But should it be?

I’m fairly certain that my company is not the only company to regularly shift resources, assets and charter in a near-constant effort to get better: we are a for-profit company and like sharks you either swim or die. We spend money on things, we want to be as efficient as possible for the best possible outcome, and ostensibly every reorganization is made with that goal in mind.

In a world where this is the case, it occurs to me that by now there should be a playbook for these things: how to determine the lines of the reorganization, how to pre-identify some of the impacts (both proactive and reactive), and most of all how to size the costs associated. Those costs need to be juxtaposed with the previously planned expenditures and weighed accordingly – you cannot absorb the impact of moving a thousand people around with no delay in production or productivity; to pretend otherwise is either specious or obtuse.

One could argue that we cannot get to the impacts of the proactive/reactive tactics to a reorganization because the people who tend to understand these pieces best are too close to the ground – they cannot be trusted, in advance, with the knowledge of the pending changes enough to provide sizing of impact, and so it’s better to let the reorg roll and then “just deal with it”.

If you cannot trust your team to size things in advance, that’s probably a signal to pay attention to. Let’s ignore that for now, because that’s not what we’re talking about here (but we will, later).

You can have some aspect of both worlds.

The Strategy of Shuffle

Working with the fait accompli that a reorg is coming, that you cannot (for whatever reason) pre-plan the reorg transparently with your organization, and that you have to land the message and then pick up the pieces: approach it as strategy.

Because this isn’t the first one of these you’ve done, and it won’t be the last.

Playbook

If you don’t have a playbook, build one. Literally start building it by capturing the experience of the pain of the tactics of this reorg (a minimal sketch of one way to structure that capture follows the list):

  • What were the hardest parts of the implementation?
  • What were the things you didn’t plan for?
  • What were the things you planned for that didn’t actually happen? Or didn’t turn out the way you thought?
  • How much time did your team actually spend implementing the reorganization?
  • What projects for that period ended up being delayed (either directly or indirectly)?
  • Did any of your KPI’s suffer?
  • Did your OKR’s have to change?
  • How did your employee satisfaction scores change before / after / 6 months after / 12 months after (for those who were part of the cohort before and after)?
  • What volume of attrition could you directly or indirectly tie to the reorg?
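
None of this needs anything fancier than a spreadsheet, but if a structure helps, here is a minimal, hypothetical sketch of what one playbook entry might capture; every field name is my invention, not a standard:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReorgPlaybookEntry:
    """One reorg's worth of captured tactics and costs (hypothetical structure)."""
    name: str
    hardest_parts: List[str] = field(default_factory=list)
    unplanned_items: List[str] = field(default_factory=list)
    planned_but_unused: List[str] = field(default_factory=list)
    implementation_hours: float = 0.0                  # team time spent on reorg tactics
    delayed_projects: List[str] = field(default_factory=list)
    kpi_deltas: Dict[str, float] = field(default_factory=dict)
    okr_changes: List[str] = field(default_factory=list)
    satisfaction_scores: Dict[str, float] = field(default_factory=dict)  # e.g., "before", "after", "+6mo", "+12mo"
    attrition_attributed: int = 0

# Example of a captured entry (numbers invented):
entry = ReorgPlaybookEntry(name="FY25 Q1 reorg", implementation_hours=640.0, attrition_attributed=2)
```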

You’re already having to absorb the tactics of the specific reorg you’re undergoing right now; you may as well track this while you’re at it.

Sharing

As you’ve captured all this information, be transparent with it – share it with your team, share it with your management, share it with your impacted peers, share it with your leadership. None of these things should be sensitive and every single one of them is useful.

“None of these should be sensitive? What if my KPI’s suffered? What if our employee satisfaction scores suffered?”

I would argue that it’s likely anyone seeing this data already has access to it — it’s not unusual for employee health scores to be shared out semi-annually or annually, OKR’s and KPI’s by their very nature are shared in a Measure What Matters context, and I guarantee that regardless of what they wrote in their “going away/changing roles” email, everyone knows why someone left the team or company.

The transparency and sharing of the data facilitate conversation, they facilitate awareness, and most of all they facilitate the ability to identify areas to improve *next time* — because there will be a next time.

Benchmarking

If you’re thinking, “hey it looks like you’re gearing up to say now that I’ve measured all this and documented it, I should benchmark and improve” then ding! go to the head of the class. Because that is exactly what you (I, anyone in this) should do. If for no other purpose than your own for the next time you go through one of these, to better set expectations and understand the volume of work, and to better approach the tactics of *that* reorg, record what it took last time and use it to inform your experience the next time.

Forecasting

Obviously if every impacted team did exactly this then that would be a heck of a conversation with leadership about (and accrued body of data to inform) the strategy of reorganizing. Armed with the data of the costs pursuant to a reorganization (in time, developer productivity, attrition) vs. the benefits (in strategic pursuit, overarching delivery, etc.) leadership can make better informed and more surgical reorganization decisions. Specifically, armed with data about implementation times — e.g., if Reorg A took a really long time to implement because the volume of entrenched and shared resources was particularly gnarly to tease apart — then when approaching the next reorganization leadership can cast an eye in that direction and ask their middle management (who will be better informed on this aspect but also ostensibly in the Circle of Trust, or at least enough to help message the reorg) to size the effort for this bout and/or adjust their reorganization plans accordingly (move more/fewer people, move more/less charter, etc.).

In turn, much like any development effort, the management team can identify predictive costs of the reorg (if we do X, it will use up about Y productivity, and potentially impact Z project, to N degrees), avoiding many of those unpleasant conversations (or worse, handwavy conversations without any actual data attribution) that happen 6, 8, or 12 months down the line when we’re collectively trying to figure out why something did or did not happen.

Perfect vs Good

A quick note here about perfectionism: it’s good in small doses to get you directionally better at things. It is not a good management philosophy, nor a philosophy to apply to any sort of “benchmarking and improvement” endeavor, and the Strategy of Reorgs is exactly that kind of endeavor. Which is to say:

  1. Your first round of reorganization benchmarking will not solve for All the Cases.
  2. Your first or even second set of impact metrics will not be enough data to create a predictive model, but will be enough potentially to suggest correlation.
  3. The practical upshot of this exercise is to fractionally minimize the pain and/or volume of expense with each go.

It’s not going to be perfect, ever. You are welcome to aim for perfection; understand you will oft settle for good.

Which is better than settling for nothing at all.

Hire Learning

I have at various times in my career been a manager, and more specifically a “hiring” manager. Management is a constant improvement cycle — I look back at some of my managerial experiences and cringe heartily, but I saw a good quote I try to employ whilst cringing: the ability to look back on a behavior and cringe means you’ve learned from it and won’t do it again. Or not as much.

The process of sifting through resumes, having “screening” calls, technical interviews, panel or individual interviews, as-appropriate interviews, offers and accepts, is a daunting, involved endeavor and I really, really wish it could be made easier for all – the candidates, the partners in HR, the interviewers, and the hiring manager.

I’ve just finished a round of hiring in my own team (two roles! different disciplines!) and a round of interviews for some other teams (as an interviewer but not hiring manager) and the most consistent thing I’ve observed is the sheer volume of nerves and anxiety involved. This stems from a positive place: as the candidate we’re nervous because we really, really want this role. It may be because it’s got the technology we want to play with or the skill set we want to enhance or the team we want to be in or the organization we want to be a part of or it may just be because it pays well, and money makes things work. (These are all acceptable reasons to go for a job, by the way. There is no shame in declaring you want to get paid and paid well.) We’re used to understanding this anxiety from the perspective of the person applying for the role; I’ll let you in on a secret: it’s a bit nerve-wracking for the hiring manager as well.

Inasmuch as it is tempting to believe a hiring manager sits atop their chair (or stands at their standing desk) and flicks dismissively through resume after resume, that isn’t it. For the hiring manager, this is an exercise in making the best possible choice: the role is open because someone has vacated it or because you have identified the need for it based on a backlog of work. In either case, every day that role remains open is a day that the needs are not met and the volume of stuff to be done grows (along with the pain of the absence). The absence of a human to fill the role is not the only problem, though: the human that you hire is now your responsibility — to foster their learning, their career, and their growth. This is a person you are going to advise and help — and probably help grow beyond what you can give them in this role. *Your* role in their career is transitory, and so the onus on you is to not only find someone who can do the work that needs to be done but find someone that you can help grow beyond that work.

In a perfect world, that is the sole consideration set for either side. The reality is that another layer of stress is then laid upon the effort: speed. How *quickly* can you land that job / land that candidate / schedule that interview / get that feedback / get the offer out / get the accept / get to that first day? Because every day that passes is a day you can lose them to another role, a better offer, a different company.

It is important to lay over this massively privileged stance a healthy heap of perspective: I am fortunate in that I am employed in (and hiring within) the tech world, one in which the December unemployment rate was less than 3%. The movement we see is a person moving from Job A to Job B, almost always to a better situation (money, location, tech, company size, whatever). If you’re in hospitality, that unemployment rate is double. Same if you’re a woman in administrative services or household support; if you’re a man in coal/petroleum or textile products, it’s triple. The people I am interviewing and who come through our portals are folks for whom these roles are a good step up; there are literally millions of people for whom the job search is anxiety-ridden not because they may not get to work with a cool piece of tech but because they may not get to eat. Or they may get evicted. Or (from the hiring perspective) their business will go under (and then they will find themselves on the other side of that coin). The “problems” I face, and to some extent those faced by people applying for roles like mine, are objectively less problematic than what others are facing right now.

My inclination (as an engineer of sorts) is to look at the system within which I work and try to figure out how to make it better — I am that person that sends unsolicited feedback to the teams I work with — like how can we be nimbler about counter-offers, how can we better screen candidates *in*, how can we make scheduling more efficient, and so forth. But as we look at the overall employment health here in the US, we have more work to do.

You get what you pay for.

I’ve been thinking about the so-called “democratization of information”, or the “right to information”, or just the plain old adage that we went to the moon with less technology than what sits in my pocket and catches emails for me; if I want to know the answer to something, then Google is there for me (or DuckDuckGo — browse privately, friends).

This is the same world where we see quips like “Please do not confuse your Google search with my medical/law/etc. degree.” The same world where one has to look for and describe what “peer-reviewed research” means. The same world where “alternative facts” and “fake news” are lobbed in counterpoint.

We have an information pricing problem.

In a conversation with a colleague we were discussing school adventures — ours — and up came terms like “microfiche” and “card catalogue”. Back in my day (with overtones here of “get off my lawn, you kids”), if one wanted information, one had to go to the library to get it — you went armed with your topic (say, earthquakes), went to the card catalogue, searched first by subject, and then narrowed it down to one or more books/items that had information about that subject. Each item was printed on a card, with the name of the item and the author(s) and publishing information (in fact, cribbing from that card is what typically got you your bibliography). You then followed the Dewey code for that item and went looking in the stacks to get the item in question, and then you had to actually read the whole item, even if you were, for example, going to cherry-pick things to meet your needs. (Card catalogues have been around as “the way to find things in the library” for over 100 years, so what was true for me was true for my parents and theirs and theirs and so forth.)

Microfiche was even more involved — if your item was on microfiche, you had to take it to the librarian or look through the drawers for it, put it into a special machine, and scroll through it until you found the article or print you were looking for. Microfiche has not been around as long as the card catalogue, but it’s coming up on its 85th birthday in libraries. Microfiche (and film) is still in use, but it wasn’t as snazzy as this when we had to use it. It looked like this. Somehow everything in the 70s was beige.

This was the “pull method” of information – you made the investment, went to the library, put in a nontrivial amount of time to get the information, and gleaned it for whatever purpose.

“Push method” — ingestion of information in a someone-else-does-the-bulk-of-the-work way — was mostly TV (nightly news, from 6:30-7:30) and radio (mostly public or talk radio). This was before blogs and user-based journalism, which have largely changed the landscape of the form and presentation of journalism (far less stuffy but far more opinionated). Journalism has a code of ethics that most journalists follow; rando persons on the internet (such as myself, hi/hello) are not bound by those ethics. (I mean, I try, but I’m not formally trained, and this is not a professional blog; this is just where I spit out things that are in my brain.) It’s important to note, however, that the “push method” of nightly news and radio, along with a relative lack of choice (when I grew up there were at first 3, and then 9, channels), meant that the news you were getting in your home was the same news that everyone else got. The same leading stories, the same local color, the same news from Washington and the world. The accessibility of the news, even with the “scheduling war for news” we saw with the Gulf War, was still relatively uniform.

Which is all a very long way to say that, for the previous 100-odd years, the foundation for information was roughly uniform and the amount of investment one had to do to get it, past that initial uniform bit we got with Tom Brokaw or Dan Rather (or before them, the nightly newspaper), was relatively involved.

My offspring recently graduated high school, and he has never known a world where information wasn’t searchable locally at home: every “paper” (most of them never made it to paper) was researched via the internet.1 Everyone I know has a mobile phone with internet search functionality on it and can quite literally “look up” the answer to any question at any time for any purpose. The *investment* to procure information is drastically lower; information is now astonishingly cheap – and I do mean cheap.

Quick digression – I’m a fan of good diction; this comes from how I operate in the world (very explicitly). If one has a reputation for being specific and direct, one has to choose one’s words carefully, because the amount of thought that goes into receiving them is ostensibly higher. When I say “cheap”, I do not mean “inexpensive”. There are a variety of definitions for “cheap”, and the fact that “inexpensive” routes to “cheap” according to Merriam-Webster is a tragedy. I think someone cut a corner there. The Oxford Learner’s Dictionary gets closer to the nuance I’m looking for: “cheap” comes with an inference of low quality as the reason for something’s low price, whereas “inexpensive” (for me) does not.

Here we return to “Do Not Confuse Your Google Search with My XYZ Degree”: the seventeen seconds you spent online “researching” your symptoms do not equate to the years of study (and practice) a good MD has (always get a second opinion, though). How many times have we heard the joke that one goes to Google with one’s symptoms and learns they’re either dehydrated or dying? The issue at hand is that while access to the information has been greatly simplified, the investment required to get to it has also been removed: the knowledge isn’t earned and the context is absent.

I can go watch endless YouTube videos about solving household plumbing problems (e.g., how to clean out your P-trap, remove drain flies, even replace a toilet). This does not make me a plumber. If I elect to attempt any of these things on my property it’s my problem but I sure as heck should not be advising you on yours (nor should you take my advice there except as maybe a prompt to go talk to someone who actually has been trained in this). I do a lot of home cooking and watch a lot of food recipes, this does not make me a professional chef. I read up a lot about the things I contend with (thyroid, cardiological, etc.) but I do so in preparation for an intelligent conversation with my MD’s about it and not *instead of* those conversations, and absolutely not to “guide” others. (Or to suggest to them that “doing their own research” will arrive at the same conclusions.)

The cheapening of information, combined with an elevation of User Generated Content to Journalism (a loosening, in my opinion, of how journalism operates — a lot more opinions and editorials) and the breadth of information and information targeting (my “news fix” may not be the same as my neighbor’s), has led to extreme polarization and, worse, a willful ignorance of information that may not align with our inclinations. (This exists, incidentally, in scientific exploration, which is why peer review is so important and why you should always get a second opinion.) This polarization is not only political; it extends to our societal behaviors when it comes to medicine (e.g., vaccines in general — not just for COVID) and how we view things like Climate Change (regardless of political affiliation, or perhaps exacerbated by it).

I am not suggesting we somehow lock down information (I mean, that would create a scarcity, which in turn would increase the price as supply goes down and demand ostensibly goes up, but that’s a little more 1984 than I think anyone wants). I *am* suggesting, as with any (relatively) newfound2 privilege or boon, that we do our homework. Specifically, that we elevate the role of, and investment in, critical thinking (in our schooling and as a foundation of education). The information tsunami (and its accompanying hurdles) will not go away, and so, much as we should be teaching financial literacy and scientific literacy in schools, we should be teaching critical thinking skills. In a world where information is cheap and easy, the filtration and identification of information of actual value is not.3

The “good” news (?) is that educational standards are set at the state level, meaning the curriculum requirements for your state are owned by your state Superintendent of Public Instruction (or equivalent). In a world where all politics are local, this can be influenced by your local state representative and local state senator (again: not Federal. You’re not writing to the person who goes to DC; you’re writing to the person who goes to your state capital).

Yes, writing. This is the sort of topic that would not come up (or would only come up cursorily) during election season, likely drowned out by the myriad of other agita that happens at that time. The very best way to get action on anything from an elected representative is to visit them, which can be impractical (in terms of investment), so the second-best way to get them to look at a thing is to write a letter (like… the kind that gets mailed). Email is your third choice here. Don’t want to go through the pain of finding your state’s legislative site and then figuring out who represents you? Go here — you can find your state (and federal) representation. Here’s a guide on writing to legislators. As for your State Superintendent of Schools — sometimes these are elected, sometimes they’re appointed; you can find that out here. (You can also use that link to find your State Superintendent, their office, and their office mailing address and email.) In addition, you can get involved through your local school *district*, either directly with the district or via a PTSA council (if you have that kind of time, and not all do).

There is a contingent of folks who will read this who either 1. do not have children or 2. have children (like mine) who have already graduated and are off to their next endeavor. The inclination here is to say “this does not affect me” and conclude that no investment is needed. I argue that that is shortsighted and obtuse: you as a taxpayer are paying for the education system, and you are paying for the product of that system (its current and future students), who in turn are going to be your future co-electorate. If the purpose of public education is a well-informed and productive public, then you should be very much incentivized to ensure your investment is well spent.

  1. The teachers explicitly stated not to use Wikipedia as it is not considered a credible source; we taught him to check out the footnotes to find the credible sources and use Wikipedia as a coalescing function.
  2. Let’s just wave a hand at it and say it started with the internet in the 90’s. That’s 30 years, and so we’re at least one and likely two generations behind here already. “Relatively newfound” is overgenerous. We are late.
  3. In a sad turn of events, searching for “critical thinking” (in quotes deliberately to get that phrase), plus curriculum plus legislation, all I got was the never-ending debate over Critical Race Theory, which is a different thing altogether. That and a WaPo article about how Texas doesn’t want to teach critical thinking skills but I couldn’t find a second source.