Do Your Updates, Part II

Firstly: a new Apple update is out for phones/pads/Macs, and you want to take it *as soon as possible*. Not only does it fix a zero day, that zero day is under active exploit. This means a problem was identified before a fix existed (the vendor had zero days to fix it) and professionals are already abusing it (under active exploit). Granted, the typical targets of these things are journalists, government officials, etc., but also folks working at corporate offices. Maybe even you.

One of the questions I have fielded since Do Your Updates is best distilled as “why can’t developers do it perfectly the first time?”. Aside from the unrealistic expectation that an engineer not be human, there are a few reasons for this.

  1. The biggest vulnerability in any system *is the humans*, and it’s not just the humans building the system, it’s the humans *using* the system. Phishing and social engineering prey on exactly that. Those emails urgently asking you to click a link, or telling you “here’s your PayPal receipt” for a transaction of several hundred dollars (designed to make you panic), are phishing. Social engineering is more like the person calling you on the phone saying they’re from Chase to verify recent fraudulent activity and asking you for things like your passcode, to read back a 2FA code, etc. These methods rely on the target feeling *vulnerable* and on a manufactured sense of urgency.
  2. Code evolves and so does technology. There was a time when a very strong password was sufficient to guard your stuff — but then we had data breaches. So then we added 2FA (two-factor authentication, e.g., when you get a text with a code to support your login) — but then we had SIM swapping. So then we added MFA (multi-factor authentication), physical YubiKeys, etc. etc. (see the TOTP sketch after this list for how those code-generating authenticators work) — for each fine cat, a fine rat: engineers on the malicious side are not resting, so engineers on the corporate side cannot, either.
  3. We talked about packages and post-deployment vulnerabilities in Do Your Updates. That is still a thing.
  4. There are *a lot* of ways an attacker can poke at the platform or the code:
    • They can insert crafted input into form fields (e.g., the text box in which you give your feedback on a thing) to tamper with how the inbound form contents are handled and reach the database in which those contents live. This goes by a variety of terms and has a variety of methods; one is called SQL injection, and it was the first thing I learned about in cybersecurity, aside from “never share your password”, back in 2002 (see the SQL injection sketch after this list).
    • They can do something called a “brute force” attack, which is just like it sounds: employing a variety of clients to just pound the ever-loving crap out of any intake on a site, to either force it to give up/let you in and/or just take the site down (DDoS: distributed denial of service). 2FA helps with this, but so does throttling (making it so that only so many requests are allowed before it locks you out; see the throttling sketch after this list), or CAPTCHA/reCAPTCHA. Except now AI can pick out all the parts that are a “motorcycle” in the image, even if you can’t. And so now engineers have to figure out the difference between a less tech-savvy person reaching for their paper-written passwords and typing those carefully but incorrectly into the little box, vs. an AI acting as such.
    • They can code up sites that *look* like the site you want to go to, with a URL that even looks like the site you want to go to — except maybe instead of an “O” it’s a “0” in the site name (see the look-alike-domain sketch after this list). You go to the site that looks legit, whose design the attacker has scraped/copied from the legitimate site, and you type your login as always. Because it’s not the real site, it tells you “oh gosh, we need to verify it’s you, please type in the 2FA code”, and instead of you sending that code to the real site and doing a real authentication, you are providing that code to the attacker so they can go log in as you.
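
A quick aside on item 2, because the codes are neat: those six-digit authenticator codes are usually TOTP (time-based one-time passwords, RFC 6238). Your phone and the server share a secret (that’s what the QR code at enrollment encodes), and both derive the same short-lived code from that secret plus the clock, which is why no text message, and no swappable SIM, is involved. Here’s a minimal sketch in Python; the secret below is a made-up demo value, not anything real.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # Count 30-second windows elapsed since the Unix epoch.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret, for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```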
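
To make the first bullet concrete: the whole game of SQL injection is that user input gets glued into the SQL string and starts acting like SQL. The standard defense is a parameterized query, where the input travels as data and never as code. A minimal sketch using Python’s built-in sqlite3; the table and the hostile payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedback (id INTEGER PRIMARY KEY, body TEXT)")

user_input = "nice site'); DROP TABLE feedback; --"  # classic hostile payload

# DANGEROUS (shown commented out): pasting input into the SQL string lets
# the quote characters in the payload become part of the query itself.
# conn.executescript(f"INSERT INTO feedback (body) VALUES ('{user_input}')")

# SAFE: the ? placeholder sends the input strictly as data, so the payload
# is stored as a harmless string instead of executing.
conn.execute("INSERT INTO feedback (body) VALUES (?)", (user_input,))
print(conn.execute("SELECT body FROM feedback").fetchall())
```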
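
And the throttling from the brute-force bullet is conceptually simple: remember recent failures per account and stop answering once there are too many in a window. A toy sketch with made-up thresholds; real systems layer on backoff, CAPTCHAs, IP reputation, and more.

```python
import time
from collections import defaultdict, deque

MAX_ATTEMPTS = 5      # failures allowed per window (made-up threshold)
WINDOW_SECONDS = 300  # five-minute window (also made up)

_failures: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(username: str) -> bool:
    """Return False once an account has too many recent failures."""
    now = time.monotonic()
    attempts = _failures[username]
    # Drop failures that have aged out of the window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) < MAX_ATTEMPTS

def record_failure(username: str) -> None:
    _failures[username].append(time.monotonic())

# A brute-forcer hammering one account gets shut out quickly:
for i in range(7):
    if allow_login_attempt("victim@example.com"):
        record_failure("victim@example.com")  # pretend the guess failed
        print(f"attempt {i + 1}: allowed (and failed)")
    else:
        print(f"attempt {i + 1}: locked out")
```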
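
As for the O-versus-0 trick in the last bullet, defenders sometimes normalize look-alike characters before comparing a domain against known-good ones. A deliberately tiny sketch: the character map and trusted list here are stand-ins, while real tooling uses Unicode’s confusables data covering thousands of characters.

```python
TRUSTED = {"paypal.com", "chase.com"}  # stand-in allowlist

# Map commonly-confused characters to a canonical "skeleton" form.
CONFUSABLES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_like_spoof(domain: str) -> bool:
    """True if the domain imitates a trusted one without being it."""
    skeleton = domain.lower().translate(CONFUSABLES)
    return skeleton in TRUSTED and domain.lower() not in TRUSTED

print(looks_like_spoof("paypa1.com"))  # True: a "1" standing in for "l"
print(looks_like_spoof("paypal.com"))  # False: the real thing
```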

AI is also not going to solve our security problems — it will arguably make them harder, as malicious folks have access to AI, too — but it can help. AI can be used to detect anomalies faster (in most cases you don’t have to tell your bank you are traveling, because it employs AI to figure out whether that was you booking a 7-night trip to Cancun; see the sketch below), or even to predict patterns for exploits. When it does, it will not be replacing the engineer, or even making what the engineer does perfect. This dance does not end.
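
For flavor, the simplest possible version of “detect anomalies” is statistical: flag a charge that sits far outside an account’s usual range. A bank’s real model weighs location, merchant, device, timing, and much more; this sketch, with made-up numbers, is only the core idea.

```python
from statistics import mean, stdev

# Made-up history of one account's recent charges (USD).
history = [12.50, 48.00, 9.99, 35.20, 22.75, 18.40, 51.30, 14.60]

def looks_anomalous(amount: float, threshold: float = 3.0) -> bool:
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

print(looks_anomalous(41.00))    # an ordinary purchase -> False
print(looks_anomalous(2400.00))  # that 7-night Cancun booking -> True
```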

So do your updates.

Burner

I recently had the opportunity to travel internationally, and to test a few things. Namely, using a “burner” phone.

To be super clear: it is very hard to do this perfectly and I did not do it perfectly. We’ll discuss some hypotheticals further down, but I felt the need to start with that. This was a test, it was only a test, and it went pretty much how one could expect it to.

Why

There’s a lot of discourse in the media about phone confiscation, personal privacy, etc.; this shows up in articles about journalists being issued “burner phones” or in advice to acquire one yourself before international travel. I wanted to see, firstly, how that would work and, secondly, frankly, if I would actually need it. I am not the target demographic for the sort of privacy harassment (yet?) that would require a burner phone (I am not a journalist and I hold no real position of power), so the likelihood I was going to have to hand my phone over to a Cellebrite was small, but not zero. How painful, then, would a burner phone experience be?

Who

This phone was just for me, in my private travel, to talk with about ten people in two countries. The number, once acquired (see “How”), was shared with those people via WhatsApp and/or Signal. The phone wasn’t used by anyone else during this period.

When

The actual phone was acquired about three weeks before my trip, which, with life being as busy as it is, did not leave me much time to set up the necessary infrastructure. The plan was to have it set up pre-trip, test it a bit, and then evaluate it for the trip.

How

There are the “right” ways to do this for “ultimate privacy” (and I put that in scare quotes for a reason), and then there are the “okay” ways to do this for like 80% of scenarios, and I went with the latter. Firstly, you have to acquire a phone. You could, for example, revive an old one of yours or a family member’s, or purchase one off of Swappa. I did the former, but for “perfect” you would ideally do a cash deal, off-record, for someone else’s phone. Once you have the phone, you need to put a phone plan on it. You could, in theory, get a prepaid plan through a different carrier, and in some cases they don’t actually require an ID (as long as you’re paying with cash and/or a prepaid Visa card), but note that everything, on some level, is traceable. There are cameras at the phone store, there’s call recording at the wireless provider, etc. I didn’t bother with any of that; I just added it to my current plan.

I will note here that adding a phone to your plan immediately gives it some tether to you. The phone, when added to my plan, got “my name”, and anyone with a warrant, or really good phishing, could probably divine that this “Bobbie Conti” on the new line is the same “Bobbie Conti” already on the plan. They can also then probably get that other phone number, and my address, which in turn means they would already know quite a bit about me. BUT, the *phone itself* doesn’t impart all of that – in order to get there, you need to do that “hop” via either that warrant or that phish. Moving on…

If you have an Apple phone – and for security reasons I prefer them – your best move is to get an iCloud account, so you can load apps and suchlike. For that, you need at least an email address. For a Google email address, they like it if you have a backup email and a phone number for 2FA. So the phone comes first, but where do you get that second email address? Proton Mail. Armed with my new Proton Mail address, and then my phone number, I got a Gmail account and wired that all up to the burner. Great! I now have a phone, with the ability to load apps, text, etc., that on the surface level isn’t “me”.

A really, really driven person would have gone to a public forum of some kind (e.g., a busy Best Buy, using their demo machines) and used those machines to set up the Proton Mail account, then gone to a second one several miles away to set up the Gmail account, and so forth. I did none of that, but I did use a VPN on the machine I set them up with. That said, Google almost certainly was able to figure out it was me, since the machine I logged in from is the same machine I use for my personal Gmail (note: my Gmail is my spam hole and I do not use it for anything important).

From here I did some final tweaking and followed some basic principles:

  • I removed location services from all the things – including even weather.
  • I deleted a bunch of apps I did not need.
  • I installed Signal. Yes, WhatsApp was on there, too, but if one has to choose, one chooses Signal.
  • I did NOT load up any other accounts (emails, etc.), and absolutely did not tether any cards/payment forms to the phone.
  • I brought my own chargers, charging cables, etc., and never hooked up to public USB, nor to any Bluetooth.

This left me with a phone I could use to search the internet (DuckDuckGo for the win), send texts/Signals/WhatsApps, and… that’s about it.

A truly driven person would probably purchase, with cash, some Visa gift cards, load those up in the “wallet”, would add in one or more VPNs, and would almost certainly not have used WhatsApp. I know what they say about WhatsApp being private. However, WhatsApp *can* read your texts if a recipient reports them, e.g., if you’re getting reported for fraud or abuse. If they can do that under that circumstance, they can certainly do it under others. Additionally, WhatsApp shares data with other Meta products, so if you are traveling with others who use those, between the proximity tracking (and more, if those folks are your friends and taking pictures in which you may appear, *tagged or otherwise*), it’s not much of a leap for Meta to figure out who is holding the phone.

What

What happened was an exercise in frustration for me, and not much else.

Not having access to “tap to pay”, location services (hello, maps!), etc. made for a substandard experience compared to the one I could have had with my own phone. Instead I relied on others and/or visual directions, and on physically pulling out my card to tap it. It also meant I wasn’t getting health tracking benefits, etc. If I had been on a trip by myself and not with friends, the maps/location piece would have absolutely driven me nuts.

The phone itself received generic text-message phishing (in this case offering a job), allowed me to text the group I was with, and that was about it. There was no case in which it was compromised, invaded, etc., and there was no indication that anyone or anything actually cared about it (other than me). It’s hard to prove a negative, and as I said earlier, I’m not that important :).

The final curiosity was to see whether it would get plugged into the aforementioned Cellebrite on the return trip and… it wasn’t. Not a hint of it. In theory, an Apple phone equipped with Signal and not voluntarily unlocked is fairly “protected” (thus far) from Cellebrite forensics, but nothing lasts forever, and I would imagine that Cellebrite, having preemptively declared victory in the past only to have to walk back their words, would, in future, not advertise a capability until proven. Still, the plan had been to see if any of the account information stored on the phone (the new emails, etc.) would show up elsewhere post-plug-in.

Addenda

You could fit the “what ifs” and caveats in this scenario into a small football stadium.

If the concern is a government acquiring the data to do things with it (whatever one might imagine those things to be), then it should be noted that so much of our data is available to JUST ANYONE at any time that it’s scary. With a first name and last name, you can search court records, find addresses, see property tax records, etc. With a Social Security number (which, erm, the gov’t gives you), you can run a credit report, know where someone banks, and (if, again, you are said government) know their income and income streams. The things the government would (purportedly) need a warrant for would be specific financial transaction information, and possibly what calls were made at what time, to whom, and for how long. If one is to believe the news of the early aughts, the NSA is already listening in anyway. What is left, then, is texts to/from the device itself, the contents of which both you and the person you texted have; and either of you can be compelled via warrant.

The other concern is non-government entities, or government entities that are not your own, and, in my case, again, I’m not that important :). I would imagine the same holes in the process apply to those, if not more. I also generally subscribe to the notion that one should not say out loud anything one is not willing to defend in court or another public forum.

The core scenarios in which we hear about burner phones (e.g., journalists) are different from mine – I don’t imagine journalists using tap to pay from a burner phone in the middle of a war zone, and I don’t imagine foreign officials using said burner phone to send sensitive messages (or if so, I imagine some sort of Mission Impossible self-destruct smoke thing happening). For their sakes I hope it works, but my own scenario is nothing so dire.

One should remember the name here, too: a burner phone is so named because when it ceases to be useful and/or is compromised, you burn it; the real purpose of a burner is to get a message from point A to point B and then discard it, hopefully with no traceability back to your thumbs.

You can donate to Signal here.

You can donate to Reporters without Borders here.