The first episode in the second series of The Communications Risk Show begins with a review of dramatic news stories that affected the communications industry during the break between seasons. And there has been a lot of drama!

The topics to discuss include: Ireland rejecting STIR/SHAKEN; Ukraine police claiming to have confiscated hundreds of SIM boxes used to distribute propaganda; the hacking of cloud servers used by a Russian satellite comms provider; the US Cyber Safety Review Board wanting an ‘urgent’ transition away from sending passwords by SMS and voice; big analytics firms denounced for profiting from inaccurate call labels; opposition to a UK law that would interfere with end-to-end encryption; China accused of using eSIMs to secretly track the movements of the British Prime Minister’s car; Islamists launching denial-of-service attacks on networks and mobile money in Nigeria and Kenya; and a legal complaint from European privacy activists that could prevent US businesses developing databases that record the ‘reputation’ of individual phone numbers.

These topics and any questions from the live audience are debated by the show’s three regular presenters, industry analyst Ed Finegold, senior risk executive Lee Scargall, and the Editor of Commsrisk, Eric Priezkalns.

Transcript (auto-generated)

Hi, I'm Eric Priezkalns and this is The Communications Risk Show. We're back. Every Wednesday we're
going to chat with risk experts from around the world of electronic communications, broadcasting
live, so you can also join the conversation by submitting questions and observations as we go along. To ask a question, just type it into the window immediately beneath the live stream on our website. Messages are anonymous, so include your name if you want it read out.
We also stream live on LinkedIn, so feel free to leave a comment over there. A member of our team
will pass them along. I'll try to read out as many of your comments and questions as time permits.
Now I've been missing the chat with the guys. Today's show is a special catch-up episode to
cover all the dramatic news since the end of last season in June. And there's been a lot of news in
the 11 weeks since our last episode relating to a string of intertwined topics like spam,
authentication, mislabeling of calls, customer consent, reputation monitoring, corruption scandals,
police failure, security arguments and on and on and on it goes. So instead of having an interview
with an expert guest this week, we're going to be focused on me and my two fabulous co-hosts
shooting the shit about all that's going wrong in the world. Those fabulous co-hosts, of course,
returning from season one are Ed Finegold, an author, analyst and strategic advisor to tech
and telecoms businesses, who joins us from Chicago, and Lee Scargall, an executive and
consultant who's managed the risks of comms providers in the Middle East, Europe, Caribbean
and Asia, and comes to us today from his splendid apartment in Bahrain. Hello again, Ed. Hi, Lee.
It's great to have you back on the show. I'm looking forward to it. Are you guys looking
forward to it? Are you pumped up for the return of the show? Yeah. Nervous even.
I don't believe it. Nervous. I don't believe it. Yeah. Yeah. It's been too long. I'm ready though.
It's been a while. It's been a while. Now, normally we've got less than 30 minutes to
comment on the week's big news. As I say, we've had 11 weeks of news that we need to cover.
But I still think it's going to be a struggle to pack it all in, guys, despite devoting the entire length of today's show to reviewing what's happened. And viewers, please do submit your questions as we go along. Really keen to get that feedback. In fact, already a comment here from
Giovanni. Thank you, Giovanni. When are you guys going to discuss the insanity of the campaign
registry in the USA? Oh, I'm loving the controversial topics that our viewers are
already bringing up. Good question, Giovanni. Maybe we'll discuss it later in this show.
So, guys, Ed, Lee, where should we begin? I'm going to pick on you, Lee, to get the conversation
going. You tend to feel quite strongly about the dangers of nation states subverting the
communications infrastructure. Do you believe there's any truth to the claim recently voiced
by a former minister in the British government, Sir Iain Duncan Smith, who said the movements of
the official car for Britain's prime minister, Rishi Sunak, had been monitored by Chinese spies
until some, as he called it, little sims were removed from the car? Lee, does that sound
plausible to you? Yeah, I did see this, Eric. Now, I'm not sure if the prime minister was actually
specifically targeted here, or is it just the fact that eSIMs are becoming more ubiquitous in
the next generation of cars, and he's making this comment. So, the car the prime minister's kind of
chauffeured around in, that's a Range Rover, and that's got these two eSIMs in, and these are used
for things like remote diagnostics and navigation. Actually, as well, the top-of-the-range BMWs
that are now sold, I don't know if you know this fact, they come with heated seats,
and if you want them to work, you have to subscribe to them. You've got to pay about $18 a month
on a package to do this, and they're using eSIMs to control this, as well. So, if you buy one of
these cars that's got these eSIMs in, then you need to know that somewhere in the world, they can
kind of flick on a switch to turn something on or to turn it off. They can also slow the car down,
like we've seen in episodes of Mr. Robot. So, these new connected cars, right, they have the ability
to track your movements and navigation systems, as well, and they can also listen to your conversation.
So, yes, it's 100% possible what Iain Duncan Smith was saying, but, you know, I don't know,
did the Chinese spies, did they actually gain any intelligence from this? Did they specifically
target him? I don't know. Ed, what do you think? Do you think there's any plausibility to
Chinese surveillance agencies monitoring our movements using eSIMs in our cars?
So, I always struggle with these because it's one of those, do you think this specific thing
might have happened, right? And I would say, well, I think it's plausible that, so let me say this,
it's absolutely plausible that intelligence agencies would be using any angle that they can,
right, to collect information or have the ability to carry out some other kind of activity,
right? And if you're opening vulnerabilities to that, then I would expect that those agencies
would have their experts learning how to exploit those things. And it just seems to me from a
security perspective, you have to make that assumption now. And we're going to talk about
this more later in the show too, but just the idea that we're connecting all of these things
now suddenly changes the level of responsibility for securing them. And I don't want to spoil what
we talk about later, but I think that's a major issue that's going to keep coming up.
Well, I think we- Come on, guys. You know the answer to this. I don't know why you're being
so cagey here at the beginning of our first show in the new season. Of course they did it
because everybody's doing it. Every spy agency uses things like this. So of course the Chinese
have been monitoring the movements of the British Prime Minister and working out who he's meeting
with. Because if he can see where the car is going, then they can see who he's going to speak to.
And if, and potentially, they see where other cars are going, they can start working out all
sorts of things about where people are. This is where, Lee, you make a great point. We are,
for this, I mean, look, I'm a fan of capitalism, okay? You're not going to hear me make any
socialist propaganda on this show. But the danger of capitalism is they're starting to monitor my
movements everywhere I go just so they can bill me for my heated car seat. That's not a good deal.
I don't want everybody to know where I'm going just so that somebody can try and find a new
revenue stream at a car company. It's going too far. Lee, do we not need some kind of regulation
here to maybe, not just the security people, but actually the government to get involved here and
say, look, you can have an eSIM in your car, but we need to put some very strict controls
about what these eSIMs are used for. Yeah, and I think connected cars,
this is one of the areas which probably hasn't been addressed yet. Okay, we look at things like
IoT, but a car, there's lots of stuff going in a car. And also as well, what we haven't talked
about here is, okay, we talk about kind of surveillance within the car, but you can also
hack that car. You can get into the CAN bus network of that car. You can stop the car,
you can speed the car up, you can do anything with it if you take control of that. So yeah,
I completely agree with you, Eric. I think this is an area, connected cars in particular, where we
need to have a little bit more government intervention and a bit more regulation around that.
So, Ed, I know you don't want me to pick on anyone in particular, but Iain Duncan Smith,
he was talking about China. He was talking about the authorities in China.
And the thing I found funny about this was, I was down at the GSMA a few weeks ago for some events.
Some speaker gets up, very plummy English accent, probably went to the
right private school in this country. He's speaking on behalf of Huawei, because Huawei's
paying his salary, talking about the importance of security. I leave the events, what's in the
news, what's on BBC News, what's the top story that day? The parliamentary committee that looks
at security in this country, damning report about how China's infiltrated every layer of the British
economy, and we don't have any credible plan to protect ourselves from surveillance that comes
from it. So that stuck with me. I go down to the GSMA's headquarters in London, some guy who's basically
paid for by Huawei is talking to me about security. And yet the people who are running this
country are worried that there's too much influence over, that the Chinese have, over our technology,
over the services we have. Do you think, Ed, that organizations like the GSMA have been somewhat
complacent about the extent to which Chinese businesses like Huawei seek to influence the way
networks are secured? Or are people like me and Iain Duncan Smith exaggerating the threat?
I don't think you're exaggerating the threat, but I also don't know that complacency is the
right word. Look at it this way: the way you're asking the question makes it sound like Chinese manufacturers like Huawei and the GSMA are completely separate organizations.
But the GSMA is a member-based organization. And I think 3GPP is as well, to a certain extent. And
so who participates in those member-based organizations, who puts money into them,
who puts people and time into them, right, is kind of what makes up and drives a lot of the
direction of those organizations. And it's not a secret that Huawei's been a major influence in
both of those organizations, both because of money and person power, you know, put into the efforts.
And they've publicized that. I mean, if you look at even the amount of contributions they've made
to the 3GPP 5G specs, the 5G standalone specs and the charging specs, I mean, a lot. So that's where
I think it gets complex here is there's the aspect of it, where if we just look at it from
the telecom or technology industry vacuum and say, wow, yeah, you want everyone in the ecosystem
contributing. And then you put this political and security, you know, national security,
let's say overlay on it. And suddenly we say, Ooh, you know, should the GSMA or the 3GPP have
that much influence, you know, from a Chinese supplier? And I'm afraid to kind of ask that
question. It's like a sensitive question. I think it's fair to say the answer is probably yes,
but also because maybe no one supplier should have quite that much influence over any given
standard or market, right? You can make that argument. If you want to make the security
argument, then yeah, I mean, that's why we have these discussions, because I think you need to
sit down and you have to connect the dots, right? That there's a direct connection between
heavy influence on the standard and some activity that's happening on the national security front
that's a problem. I'm not sure how you connect those dots. That's why I think it's like there's
so much smoke, but it's hard to call out where the fire is. And it's a complex issue to navigate
around as an analyst every day. We've hit that. Well, it's difficult for you as an analyst. I'm
sure it's even harder for Lee when he's making those procurement decisions. Does it ever come
up at all when you're making a procurement decision, Lee, for the telcos that you work in,
that there's some sensitivity about where the equipment is sourced from? Yeah. I think this
is a question which has gained a lot of prominence actually over the last couple of years,
in particular in the UK. It's becoming very prominent now in Germany and elsewhere.
And the same in America as well, where they've banned a lot of the Huawei equipment. So,
yeah, I think particularly in Europe and the US, it is a growing concern, right? And that's
been around, not just today, that's been around for the last few years. If you go outside of that,
maybe into Africa, Asia, I don't think that is a question which people are asking right now,
or maybe they're starting to talk about it. I think when it comes down to a procurement
decision, it's probably based a lot on cost. Yeah. And I think that's still a big driver
in those parts of the world. Yeah. Yeah. Cost is still the big driver. In fact,
that came up the other day with Germany's interior minister saying that cost couldn't
be used as an excuse for continuing to use Huawei's equipment. Although the situation in
Germany I find difficult to understand because it seems to be they're almost looking at a kind of
component by component level to work out whether there's something secure. And with the greatest
respect, I don't think you ever get to security if you're trying to look at each individual
component because the task becomes too hard to be achieved like that. It may be unfair sometimes to
generalize, but you need to make some sweeping generalizations in order to have a security
strategy. But let's park that for a moment. Let's keep the conversation moving forward because I
know a really thorny issue. And Lee, I'd love to have your opinion on this again. We're a little
bit Brit focused at the beginning here, but it's worth it because it's totally a topic that's of
huge global significance. Great big fuss in Britain at the moment, a heck of a lot of
academics going crazy about the topic of end-to-end encryption in communications and
the proposal from the British government to effectively create a mechanism, let's call it
like that. I mean, they don't want to be too specific on how it works though. It likely
involved software being installed on the phone to scan messages before and after they were sent,
but a mechanism to effectively work around end-to-end encryption with the idea that this
is necessary to protect children from paedophiles. Lee, before we get into the sensitive rights and
wrongs of the issue, do you believe the threats made by executives like Meredith Whittaker,
boss of Signal, who's come out and said that her business would quit the UK market rather than
comply with that requirement to monitor messages as written into the UK's online safety bill?
I think we're going to have to wait and see on this one, Eric, just to see if they do. But if
you look at the history, there's actually a growing tendency here for tech companies in
particular to pull out of countries for non-compliance over privacy laws. Now, if you go
back to, say, back in 2016, first you had Apple. They refused the request from the FBI to unlock,
I think it was a suspect's phone, and to decrypt the information on it. And then also recently you
had Telenor in Myanmar. They quit because of the surveillance measures imposed by the military
there. So I think there is some kind of precedent here. Now, will that change or force the UK
government to do a U-turn on its online safety bill? I don't think it will. I don't think they'll
be concerned by a threat like that. Ed, what's your perspective from the other side of the Atlantic?
Is this something that you admire the British government for doing, for stepping up, challenging
the tech businesses? Or do you think they're completely wrong, that the tech businesses will
walk away from even a market as large as the UK if end-to-end encryption is undermined?
Yeah, I think it's like if you're the only market that's undermining or, excuse me, if you're the
only market that's eliminating or preventing companies from using end-to-end encryption,
and they need that to conduct business for various reasons, we could sit here, I think,
and talk about legitimate use cases like in telemedicine alone all day, right? And that's
something that's grown quite a bit since the pandemic. So that was one of the first things
that came to mind. I feel like if you're not only preventing businesses from engaging those things,
you're probably taking access to healthcare and other services away from vulnerable people who
need it in an effort to try to protect some other group from a group that's misusing a technology,
right? So there's one group of people that are misusing this technology, and you're saying,
you know, no one can use it. And that just seems very draconian. It doesn't really make a lot of
sense. I mean, I think it makes more sense, and this is probably how the regime is structured now
if you sat down or broke it down, and it probably needs to be refreshed. But if you look at it and
say, okay, certain types of communications must have end-to-end encryption, like we were talking
about, like telemedicine types of things, you know, situations like that when you're really
dealing with very, very personal matters, or I imagine confidential data and or control,
right, which is a separate thing, like we were talking about with vehicles,
control systems, especially for sensitive equipment and infrastructure, right? Those kinds
of things, I feel like they require end-to-end encryption. So how could you outlaw it? It just
doesn't make a whole lot of sense. And again, I think you end up cutting off services from people.
Okay, so let's stick on that point for a moment, because you're a parent as well, as a tech
analyst. There's been some very strong feelings expressed from the UK academic community, from the
cybersecurity side of the UK academic community, saying there can be no compromise on end-to-end
encryption. Is there a legitimate argument, though, that in the end, you have to weigh that against
the potential harm to children, if there isn't any police power, if there isn't any legal way
of being able to intercept and read people's messages? Is there any argument for seeking
some kind of balance here, however technically difficult? Or is it impossible to find a balance?
Do we have to come down on one side or the other?
You know, when I thought about this, one example that came to mind wasn't in the digital world,
it was more in the physical world. And it works like this, right? If your children are going to
school, right? And this is a reality, if your kids go to school in the United States, especially in
a public high school, you know, like my kids, they're going to go through a metal detector
when they go into school, and there's going to be surveillance all day long within the school.
And I am 100% okay with that, in that setting, under those conditions, right? Within those
boundaries, I think that's completely fine. I acknowledge that there's surveillance for
police reasons, and every storefront and people with their doorbells and everything else is
everywhere anyway, when you live in a, you know, especially in a major city, you kind of
get used to the fact that that's happening. But in terms of like, actually sanctioned,
you know, active surveillance by a government entity, you know, inside a school, I'm fine with
it. If that was tracking my kid every single place they went, you know, other than school,
for some reason, that's where it starts to have a problem, especially when it comes into your house,
and it's like, oh, you know, we saw your kid was up at 11pm, they should have been in bed by 10,
we're gonna, you know, DCFS, you know, child services is going to show up at your house and
do something. If it, you know, if it got to that kind of a level, obviously, that's, that's
threatening your civil liberties, and I have an issue with it. So on the digital side, I think
it's more complex, right? Because it's harder to know where the boundaries are, as far as that
goes. But again, when we come back around to it, I mean, I think from the academics point of view,
what it struck me as is one is, what is your goal? What are you trying to solve? And two,
if you get a group of people in a room, and I've, this has been my experience with all kinds of
people, including academics, and they need to sign on to something they can feel passionate
about and stand behind, well, it's a heck of a lot easier to do something that's black and white,
as opposed to shades of gray. So it sort of becomes easy to say, yes, darn it, we're going
to stand on this box and say we're against end-to-end encryption because of pedophiles.
Like, okay, you know, that was what you decided when you had your meeting, that that's what you
wanted to argue. But that's, but the academics are saying the opposite, to be clear. They're saying
that end-to-end encryption cannot be compromised at all, even using this method where software may
be installed on a handset and that software may be used to say, look for certain words associated
with crimes that relate to child abuse. And this, look, this is going to be a thorny topic because
this isn't going away because however much people may play the role of experts,
the public themselves are divided in a way that they don't even understand.
If we phrased an opinion poll question and said, would it be acceptable to look at certain
messages because you want to look for evidence of child abuse, there'd be overwhelming support
in the polls for that. If you then try and phrase the question about the government having,
you know, surveilling your activities and being able to monitor where you're driving your car,
things like that, of course, the opinion poll outcome would be a contradictory opinion poll
outcome. In fact, and you know, this is something that's going to come up in some of the other
things we're going to talk about this season. If you look at, say, what's happening in Ireland
right now, very good series of proposals for how to deal with scam messages. One of their seven
big proposals can't be implemented at this point in time. Why? Because it's not legally permitted
in the European Union to be scanning people's messages, to be looking for certain words.
And yet we know that if we went to the public and say, we put in place this technology,
it's going to scan for all the scam messages. When there's a scam message that fits this
certain string of words that we're looking for, we'll block those messages for you. The public
will go, great, fantastic. Thank you very much for doing that. We're glad to have it. So there's
some contradiction here, isn't there, Lee? There's some contradiction in terms of the public will want
technology that intercepts and intervenes your communication sometimes, won't want it other
times. It's very difficult to come to a definition that keeps everybody happy. Perhaps we are too
often contradictory ourselves, Lee. Yeah, I think we all, the whole thing is contradictory.
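The kind of message scanning being discussed here, a machine matching texts against known scam word strings, can be sketched in a few lines of code. This is a simplified illustration; the patterns are invented, and real filters rely on curated, constantly updated threat intelligence rather than a hard-coded list:

```python
import re

# Invented patterns: a production filter would load curated, regularly
# updated scam indicators rather than hard-coding a handful of regexes.
SCAM_PATTERNS = [
    r"your parcel .* pay .* fee",
    r"your account .* suspended .* verify",
    r"click .* claim .* prize",
]

def is_suspected_smishing(message: str) -> bool:
    """Return True if the message matches any known scam word string.

    This is scanning by machine, not reading by a human: the filter
    only learns whether one of the word patterns occurred.
    """
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SCAM_PATTERNS)

print(is_suspected_smishing(
    "Your parcel is waiting, pay the customs fee at evil.example"))  # True
print(is_suspected_smishing("Running late, see you at 8"))           # False
```

Whether such pattern matching counts as acceptable machine protection or unacceptable surveillance is exactly the public contradiction being debated here.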
I think what fits for one person doesn't fit for somebody else, right? Now, personally, I have no
issue. If I know a computer or a machine is reading my messages and it's not a human being,
and it's there to protect me from receiving a smishing message, which I could just inadvertently
click on that link, and they've got my banking details, and then that's a big issue, right? And I
think so long as it's there to do good, then I think the public will accept it. Interesting. Maybe
a controversial comment here from one of the viewers. Given that phone calls are unencrypted
while access to them is still restricted by privacy regulation, isn't the argument for encrypted
messaging sessions questionable? I really think that there's a lot more in this that can be
explored because it tends to be presented as black and white, and yet the more you explore
these issues, the more you find that people are in contradiction themselves in terms of what they
want. But we're already running behind schedule, so I'm going to move on with today's conversation
by referring to one of our weekly sponsored features. It's the Symmetry Prism Fact of the
Week, an interesting fact supplied by the team at Symmetry Solutions and their Prism Fraud
Intelligence Service. Now this week's fact relates to crime and phone numbers. You already know that
fraudsters acquire phone numbers in order to commit crime, but did you know they sometimes
acquire phone numbers that are only supposed to be used for restricted government purposes,
such as numbers belonging to the same ranges as the emergency services? The team that gathers
intelligence for Symmetry's Prism Service has identified instances of numbers being sold to
criminals that begin with 911 or other numbers where commercial use is supposedly prohibited
by law. This shows that criminals are willing to constantly push the boundaries in order to find
new ways to trick unwary phone users and work around controls implemented by comms providers,
so make sure your controls are as comprehensive as they can be by using Prism Intelligence to
identify all the abusive calls made on your network so you can stop them before they cause harm to
your business or to your customers. Learn more about Symmetry Solutions and their Prism Fraud
Service from their website. Now back to the chat, and let's talk about a story
that sent shockwaves around the global telecoms industry, not least because a load of vendors,
in fact I've noticed recently some award-winning vendors, have been insisting the literal opposite
to this story will happen. Just a short while after the final episode in our previous season,
Ireland's communications regulator, ComReg, issued a plan to deal with scam calls and texts
that rejected STIR/SHAKEN, the expensive technology for call authentication,
on the basis that, and I quote, many of the nuisance calls in Ireland are generated offshore,
so there would be little value currently in implementing these standards in Ireland on its
own unless it became a globally adopted approach or the balance of nuisance communications swung
heavily towards onshore generation. Now Ed, a bunch of firms, they've gone out of their way
to tell every country that STIR/SHAKEN would inevitably become global and have sold their
value on the basis that if it becomes used by other countries then you receive the benefit too.
However, last season we had experts like Professor Feng Hao from Warwick University, who gave a solid counterargument: he said it is an unrealizable dream that there would be interoperable versions of STIR/SHAKEN worldwide. Has this decision by the Irish regulator proven to be the ultimate stumbling block that kills off the dream of STIR/SHAKEN becoming global? I don't think that it will kill STIR/SHAKEN, just because there's so much,
there seems to be a lot of political momentum, a lot of people already seem to have, you know,
vested stake in STIR/SHAKEN, so I think it will continue to move forward, but I do think it's a
sign, you know, I'd like to think, having spent some time in Ireland and having friends in Ireland,
it's a sort of Irish pragmatism of saying we have a real problem we need to solve,
and this isn't so much about just being part of an effort to create this global STIR/SHAKEN
ecosystem that will work really well once we get to critical mass, we need to solve the problem now,
we have people to protect, so let's take action. And what will be interesting is to see if other
regimes take similar steps, and I think if enough of them do, yeah, that probably takes a heck of a
lot of wind out of the sails of STIR/SHAKEN. But I think that's part of the real problem: there are two issues that I keep seeing as we discuss STIR/SHAKEN more and more,
and one of them is just this issue that for it to be super effective, and certainly effective
on a global basis, right, it has to have this pretty strong critical mass of adoption,
you know, that's a little bit hard to get to, so you're selling a promise a little bit,
and when you're selling someone's share in it, you're selling kind of a promise that everyone
else is going to get on board too, even if you don't, which I think is a little bit of a challenge
for it, but you know, the other one is, it's become kind of a political campaign,
right, it's become very political, it has all this like political overtone to like getting
on board with it, and I can't say I've dug into that and really understand it,
but I don't think that it helps, right, because it's sort of like,
it reminds me, quickly, it reminds me a little bit of 5G, where now people are starting to talk
about, well, 5G didn't deliver its promise, and now they're talking about 6G, you know,
SK Telecom just put out a whole paper about 6G, you know, improving on 5G's mistakes and the
things that it missed on, and I feel like STIR/SHAKEN was a bit like that: a lot of energy
behind solving this problem, we've talked before about how it's really only addressing one element
of it, we just talked about how it needs to fulfill a lot more promise, right, to really
come together, and so that's where I think like after a while, you say, man, that's a lot of
steps to climb, it's a lot of momentum to overcome with a fast-moving problem, and again,
now we start seeing governments like Ireland's saying, look, we got to take action now, right,
and those steps, yes, I think ultimately can kill an effort like STIR/SHAKEN, because it's just
going to run out of momentum on its own lack of relevance at that point.
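As background to the mechanism being weighed up here: in STIR/SHAKEN the originating provider signs the calling and called numbers, plus an attestation level, into a PASSporT token, and the terminating provider verifies that signature. The sketch below illustrates the flow only; it substitutes an HMAC shared secret for the certificate-based ES256 signatures the real standard specifies, and the numbers and key are invented:

```python
import base64, hashlib, hmac, json, time

# An HMAC shared secret stands in for the certificate-based ES256
# signing that real STIR/SHAKEN uses; it keeps the sketch self-contained.
SECRET = b"demo-signing-key"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_passport(orig_tn: str, dest_tn: str, attest: str) -> str:
    """Originating provider: sign the numbers and an attestation level
    ('A' full, 'B' partial, 'C' gateway) into a JWT-style PASSporT."""
    header = {"alg": "HS256", "typ": "passport", "ppt": "shaken"}
    payload = {"attest": attest, "iat": int(time.time()),
               "orig": {"tn": orig_tn}, "dest": {"tn": [dest_tn]}}
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_passport(token: str) -> bool:
    """Terminating provider: recompute the signature over the received
    header and payload; any tampering makes verification fail."""
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(SECRET, signing_input.encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(expected, sig)

token = sign_passport("+35315551234", "+14155550100", "A")
print(verify_passport(token))        # True: untampered token verifies
print(verify_passport(token + "x"))  # False: altered token fails
```

The hard part of the real scheme is not this signing step but the trust infrastructure around it: certificates, governance, and the critical mass of participating networks mentioned above, since a signature only helps if the terminating side can and does verify it.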
So these things are political, though, I think you're absolutely right, but these things are
inherently political, it's just the same with Chinese suppliers having influence over security,
somebody has to make decisions, and the question about who gets to make decision can become very
sensitive very quickly when the decision relates to something like, is somebody allowed to make a
certain phone call from one place to another place that tells you this and that, and that's ultimately
what STIR/SHAKEN's about, we can talk about all the bad calls we don't like and phrase it in that way,
we can talk about the fact people don't like spam, that there's some illegal calls, we can focus on
that, but in the end, somebody's got to decide what's good, what's bad, what's allowed, and what's
not, and that then becomes very political, because maybe the usual suspects get to decide all the
time, and if you're not making that decision for your country, or your country's calls are being
blocked, then you might feel some reason to resist the rollout of this, but anyway, let's not get too
bogged down on that. Lee, the strategy adopted by Ireland, which includes the blocking of inbound
international calls, which present a domestic phone number, looks a lot like strategies we've
talked about before on this show. Do you think that that aspect of the strategy that we've talked
about, and we've talked about, you know, this occurring in the Middle East and elsewhere, and
Scandinavian countries, do you think that is now the new template that regulators will follow
worldwide? I think so, potentially, I mean, it's very cost effective, but you know, if you look at
ComReg, and what they're doing in Ireland recently, right, there's some really cool stuff coming out of there. As you just mentioned, they've adopted the same approach to international call spoofing as what we're doing over here in Bahrain. They've also done the DNO, which is the do not originate list, and these are numbers
which are mainly used by financial service companies, or helplines where people call in,
yeah, they're not numbers which generally call out or originate calls. Now, if they have that list and they see these numbers originating calls, they're blocking those as well. So, you know, there are
some very cost effective measures, right, out there, so I do hope that they start to become,
you know, a template for other countries around the world to adopt.
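The DNO check Lee describes is simple enough to sketch. This is a hedged illustration of the logic only: the numbers and names below are invented for the example, not real ComReg registry entries.

```python
# Minimal sketch of a "do not originate" (DNO) check: the list holds numbers
# such as bank helplines and inbound-only hotlines that should never appear
# as the origin of a call, so any call presenting one of them as caller ID
# can be blocked outright. Entries here are hypothetical.

DNO_LIST = {"+35318001234", "+35318005678"}  # illustrative protected numbers

def should_block(presented_caller_id: str) -> bool:
    """Block any call whose presented caller ID is on the DNO list."""
    return presented_caller_id in DNO_LIST
```

The appeal of the approach is that it needs no cryptography and no cross-network coordination beyond maintaining the list itself.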
And, you know, there is the problem of mobile phones: obviously, a mobile phone, when it's roaming outbound, is using a domestic number. But there are ways and means, and the Irish have set out a very clear roadmap of how they're going to improve technologically over time, so that they'll be able to handle even the mobile phone case. And effectively, I think the big difference here between the STIR/SHAKEN universal approach as presented, versus this strategy of blocking calls, is that the STIR/SHAKEN approach promises, when rolled out universally, that every call will be authenticated, so you can validate every call. Whereas with the alternative here, for these international calls that are being blocked, you're not trying to validate every call, you're just using a really simple control to identify a lot of invalid calls. It may not be 100% perfect, but if you're identifying a lot of invalid calls very quickly and easily, how much do you care about that additional little quantum that STIR/SHAKEN gives you?
Do you not find that to be the most attractive part of the proposition of what, as I say, countries like Ireland and Scandinavia are looking at now? It isn't promising to boil the ocean and deliver perfection, but it just gets rid of the vast majority of the bad calls that you're worried about on day one. Yeah, I completely agree with that, Eric. Now, you
just mentioned there about if it's an outbound roamer calling back, that is potentially an issue,
but there's a solution to that, and then all you need to do in-country, just allow each other access
to each other's HLRs, you can then check that HLR and you can see if they're in a roaming scenario,
if they are, you can allow that call to pass, if they're not, then you block that call, it's a spoof
number. Yeah, so if you look at it on paper, it's a very simple, cost-effective solution,
it removes all of that issue, all the headache, all the expense that you get with STIR/SHAKEN.
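The roaming check Lee outlines can be sketched roughly as follows. This is a hedged illustration of the decision logic only: the HLR is modelled as a plain dictionary and the function names are assumptions, not a real SS7/MAP or HLR query API.

```python
# Rough sketch of the anti-spoofing roaming check: an inbound international
# call presents a domestic number, so we consult the home network's HLR to
# see whether that subscriber is genuinely roaming abroad. The HLR is
# modelled as a dict of subscriber records; a real deployment would issue
# a network query instead.

def hlr_is_roaming(msisdn: str, hlr: dict) -> bool:
    """True if the HLR shows the subscriber attached to a foreign network."""
    record = hlr.get(msisdn)
    return record is not None and record["serving_network"] != record["home_network"]

def screen_inbound_international_call(presented_number: str, hlr: dict) -> str:
    # A domestic number arriving over an international route is legitimate
    # only if its owner is an outbound roamer; otherwise treat it as spoofed.
    return "allow" if hlr_is_roaming(presented_number, hlr) else "block"
```

The design choice is deliberately coarse: rather than validating every call, it applies one cheap rule that catches the bulk of spoofed domestic numbers arriving from abroad.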
Yeah, absolutely, now comments coming in from viewers, another one here from Giovanni Tironi,
the operators make too much money off spam calls and texts, exclamation mark, follow the money,
they are never going to stop it, at least not in the USA, I think you make a good point there
Giovanni, a bit controversial, but we're not afraid to read out controversial opinions, so please do,
other viewers would love to hear your comments and opinions, and I think Giovanni,
you're going to be keen to perhaps pipe in and jump in on our next topic, the next question I'm
going to ask the guys about, because also in the ComReg plan for Ireland, they intend to follow
the approach used in Singapore, which will involve blocking SMS messages that do not match entries
in a national sender ID registry, is it possible, and I'll start with you on this one Lee, is it
possible that smaller countries will take the lead with scam and spam prevention, using techniques like sender ID registries, or like blocking inbound international calls that spoof domestic phone numbers, because they can afford to be more agile when formulating straightforward ways to stop unwanted traffic from bigger nations? They can use simple rules, because there isn't that much domestically originated spam and illegal calling, and they can use sensible techniques like, effectively, a sender ID registry, a white
list for business SMS messages, is this not the way forward, is this something that you can see
other small countries just being quicker at adopting this than big countries that get bogged
down in the arguments about how to do these things? Yeah, look, I think you've hit the nail
on the head there, right, the key thing is about being agile and being able to adopt quickly and
adapt to the evolving threats. Now, you know, if you look at these smaller countries like Ireland
and Bahrain, where I am, there's only three or four operators, right, so it's relatively
easy to get kind of a consensus between them and to move quickly. Now, if you compare that to the
USA, right, you've got about 900 telecommunications operators over there, right, so, you know, then you
can start to see that kind of getting any agreement between them is actually going to be pretty
difficult, right, so a lot of the innovation, it's likely to come from these smaller nations.
Okay, I'm going to throw you a real difficult question now, Ed, apologies, but it relates to
what Giovanni said earlier on, so I have to ask it now. Singapore, sender ID registry,
it's a simple concept, a business has to basically get permission to send bulk SMS messages,
A2P SMS messages, and if your sender ID does not match what's on the list,
tough luck, that message is going to be blocked by everybody, so it's a simple way of cleaning out
a lot of dirt, a lot of muck from the sending of SMS messages in bulk. Singapore's done it,
it can be done, the same idea exists in the USA, where the equivalent registry would be the campaign
registry, but it seems to be bedevilled by the most extraordinary delays in getting it up and
running and protecting Americans. Is there something about being in a big country that
makes it harder to get these things done? I think there's two things about it, I mean,
some of it is bigger, right, and also being, you know, like, if you take the politics out
of it in the United States, it's 50 different states and 50 different state legislatures, and
it's a lot of layers, right, not just layers of red tape, but a lot of layers of people that have
to agree on things, so it moves slow and it's hard, and then I think when you add on that,
the political climate now in the US, where you'd like to think that something like, you know,
blocking, so to the point of doing something like what Ireland's done, you know, if you can turn
around and celebrate that you made a 75% improvement in something like a $350 million problem, that's a huge number, right, that's an easy thing to kind of
celebrate, even politically, to say, hey, we reached across the aisle, and, you know, we had this win,
and it benefits everyone, and you have a political climate in the US now, where that's a sin,
right, working with the other side is a sin, and so that, I think, really further slows things down. To your point, it's not just that it's a big, heterogeneous country, it's also a country that doesn't want to agree on things, and that makes anything that requires some solidarity harder to move forward,
absolutely. It's okay, guys, you'll have to forgive me, because my next question is super long,
but it includes within it some news that I think is, it's been under the radar, but now it's coming
out into the open, and it's going to become one of the big issues that defines how communications
is managed going forward for the rest of the century. Now, STIR/SHAKEN was originally missold in the USA with the promise it would be used to enhance analytics that would automatically decide whether to block a call or not. Now, they got away with that misselling, because it was missold to the regulators, and regulators never admit to mistakes, but in the end, STIR/SHAKEN has not been
used at all for blocking calls. However, what we have still is call analytics, and increased
reliance upon call analytics in the USA, in order to protect the public from calls, from bad calls.
Those analytics are imperfect. So, what we get instead of blocking is warnings that are presented
to phone users on their handsets. Typically, a message that would say something like,
suspected spam, suspected telemarketer. So, it's not definitive, it's warning you,
doesn't tell you not to answer the call. There's three companies in the USA that dominate
the supply of this service to determine whether a call is suspected to be good or bad.
Hiya, First Orion, and Transaction Network Services, or TNS for short. Now, this year,
we've seen complaints from businesses that say legitimate legal calls have been mislabeled.
A numbering identity specialist, Numeracle, very recently filed a scathing submission with the Federal Communications Commission, the US regulator, accusing Hiya and TNS, though interestingly not First Orion, of exploiting a conflict of interest
by charging telcos to put spam labels on calls, then charging the businesses whose calls are
being labeled to take the labels off of their calls. So, very clearly, a strong argument that
there is a conflict of interest occurring in the USA. But before we focus on the USA,
Lee, let's give you a chance to ruminate upon this problem in general.
Is the decision to put labels on phone calls, instead of just choosing to block them,
a kind of abdication of responsibility by the authorities? Because we know that ordinary people
are not going to answer those calls, but it means the authorities will say, well, it's not our fault.
Don't blame us that a mistake was made. We didn't decide. The consumer decided in the end.
Is that really a cop-out by the authorities?
I think this is actually quite a difficult one to answer, because I wouldn't feel comfortable
as a customer, and also somebody who works in the industry for a telecom operator,
having a third party decide whether to block my calls. Yet, on the other hand,
this is where it gets difficult. I do like analytics to detect spam calls, because you
can automate the blocking process in the background, and you get all the benefits.
We go with that. The problem here with analytics is the accuracy. And it's like, what type of content
constitutes a spam call? So, if it's a call to commit fraud, then that's a clear case to me.
But what about a genuine sales call, maybe to invest in a luxury timeshare apartment in Florida?
The point I'm trying to get across here is, what one person considers a spam call may be of interest to somebody else. And it's about how you reliably differentiate between the two. Now, my feeling is, it's not an entire abdication of responsibility here.
I just think it's very complex to implement, and it's probably going to be full of litigation if
you get it wrong. So, they've chosen to label the call instead of blocking it.
Well, you're always on the side of the telco, Lee. I'm not surprised by your answer.
You're more diplomatic than me. Ed, let's bring you in here. Let's make it specific to the US
communications market now. Do you suspect there's any truth to Numeracle's assertion that there is
a conflict of interest where the same analytics firms seek payment for things called brand
management and reputation monitoring services, which effectively means your business is more
likely to have its calls not suffer these labels, whilst at the same time charging fees to the telcos
to put those labels on in the first place? Is there a genuine exploitation of a conflict of
interest there? No, no, no. No, no, no. In America,
that's not a conflict of interest. That's a good business model. What are you talking about? Yeah,
no. Of course, it's a conflict of interest. And it reminds me of the violation of a basic business
control, like when the same person controls both accounts receivable and accounts payable,
and then later is busted for embezzling, and you wonder why. It's sort of the same type of setup
that, in any basic business control, no one should control both ends of it and then profit from it.
So I think it's just this, even if there was no wrongdoing, the setup in that nature is naturally
a conflict of interest. But if I take it like one level up, the whole area of digital reputation is
like this awful minefield, this horrible, disgusting minefield to work in. It made
working in corporate marketing really super unpleasant as well, just because so much work
goes into building brand. And the people who work in those things know what I'm talking about,
right? It's an enormous amount of blood, sweat, and tears that goes into doing really simple
things or making small changes. And when someone else, one, can come in and step all over it,
and the digital world is bad enough, like in social media and what have you. But then when
you have someone who declares themselves an authority on digital reputation, deciding
what is or isn't digital reputation, and you have to pay homage to that, that in itself is awful,
right? And it produces bad things in our society. We could have a whole separate show about that.
So yeah, I find not only that example to be a terrible conflict of interest, but I think it's
sort of endemic to the field of digital reputation. There's like nothing
less authentic than digital reputation. I hear you. I'm going to read a couple of comments here.
Well, I don't think we'll ever stop on this one. This is just going to run and run. A couple
of comments here coming in from Pierce Gorman. Thank you for watching, Pierce. He says, he asks,
is blocking calls adequate, or do you need to know where the calls come from so you can apply measures to prevent the calls from being originated in the first place? Great question. We'll come back to that. And: a sender ID has to be assigned and able to be revoked somehow. Is there any opportunity for spoofing the sender ID? Do we need someone to verify the sender ID?
STIR/SHAKEN uses conventional public key cryptography and it provides a cryptographically verifiable sender ID. So, okay. Some of our viewers like STIR/SHAKEN; not everybody's influenced by my
opinion on that. I respect your point of view on that one, Pierce, but we are going to take
another quick ad break now, although it doesn't feel like an ad break, because it's from our favorite Texan in the whole world, Jeffrey Ross of call authentication, fraud prevention, and geolocation specialists OneRoute. Just like last season, Jeffrey and OneRoute will be taking us around the world in your phone. And this week he leads us on a journey
to somewhere that Jeffrey knows especially well, but I won't give the secret away.
Producer James, please roll VT. Hey everyone, from OneRoute, I'm Jeffrey Ross and this is
The World In Your Phone, season two edition. Let's talk about Texas. Now you might be asking yourself,
hang on, this is The World In Your Phone. Why are we talking about Texas and not a country?
Well, I'm here to tell you. Number one, it's Texas. Number two, Texas was originally its own
country, the Republic of Texas, and still holds the right to leave the United States and become
its own country again. But did you know that Dallas, Texas is quickly becoming the telecom
tech hub of the US? The Dallas-Fort Worth metro area houses several telecom companies with over
800 privately owned telecom companies, and dozens of public companies with North American
headquarters. AT&T, for example, moved its headquarters to Texas in 2008, along with
telecom related companies such as Fujitsu and Cisco Systems. Texas is the second largest state
in the United States. It uses its own power grid separate from the rest of the US, and the famous
King Ranch is larger than the state of Rhode Island. The state sport? The rodeo. You can also
tour the world in Texas by visiting towns with names such as Athens, Bogota, Dublin, Edinburgh,
Florence, Geneva, Holland, Liverpool, London, Moravia, Moscow, Naples, Newcastle, Odessa,
Palestine, Paris, yes, there's a version of the Eiffel Tower there, Rome, Italy, Tokyo,
and finally, Earth. Be sure to subscribe to OneRoute on YouTube, where you can catch up on The World In Your Phone, and watch the OneRoute Roundup, a show that spotlights individuals and companies making a positive difference in the telecom industry. One more fun fact about Texas:
the first frozen margarita machine was invented in Texas. Eric, back to you, and more of this great
communications risk show. Cheers.
Well, thank you, Jeffrey. He's a Texan through and through, and of course, Texas was briefly a country as well. Let's never forget the Texans once stood on their own, jabbing away, pointing their finger at the rest of the United States, saying, we're not sure if we want to be joining you. So many things can change in this world, and I think, let's go back to the topic of
reputation, because reputations rise and fall, don't they, over time? And there's so many big
stories, we're not going to have time to go through it all today. Guys, I'm really sorry to
the viewers who are watching. There's just so much to unpack that's happened in the last 11 weeks.
And I think this is another huge story that was always an issue that would come to the fore,
and is now coming to the fore. Reputation, linking it to the reputation of a phone number,
deciding whose calls are good, whose calls are bad, on a global level. Now,
whilst we were away, the privacy activists at NOYB, which is short for None Of Your Business, lodged a complaint in Belgium about Belgian phone numbers being transferred to the USA for inclusion in a reputation database. Now, NOYB and their lawyers argue it's against GDPR for Proximus and BICS to provide data to TeleSign, a US-based CPaaS provider that was acquired by BICS in 2017.
They said that information about a phone number is personal data, therefore it's covered by GDPR
for Europeans. And the exemption in GDPR for the transfer of data without an individual's consent,
if it's going to be used to prevent crime, cannot apply, because the goal of evaluating
the reputation of phone numbers in general is far too general to satisfy a crime-related exemption.
Ed, is this an example of a mismatch in expectations about privacy from either
side of the Atlantic, clouding decisions about how we are globally going to manage
the international telecoms ecosystem?
So, I'm going to say yes, but I'm going to go one better, and I'm so glad that you asked this, because it's like putting a giant watermelon on the tee for me to hit with a tennis racket. But I think, once again, the phone number,
the phone number is a problem. And if I get more specific about it, forgive me, because I have
notes on this that help me figure it out. So, I think the real problem is who possesses knowledge
of the linkage between a person and a specific phone number, right? That's really the problem,
is that bit of knowledge of data. And so, it's sort of not the phone number, but rather whose
device you can contact at that phone number that's at stake. And I think that that's like
the root of all the problems that we're seeing now that get into privacy and identity theft and
account takeovers and all that sort of thing. And it's, this case is actually like exemplifies
the root of the problem. It's like light. Is it a particle or is it a wave? You know,
phone numbers are public. You don't own it, right? Anyone can call or text it. But your
temptation is to treat your phone number as private property and as PII, you know,
personally identifiable information. And because so many things we do in the digital world
lean on your use of this non-permanent thing that identifies you, except you don't own it,
right? It's a really, really weird set of relationships with the phone number.
And what's even weirder about it is, right, the entire system, even down to your 5G data connection,
right, depends on a phone number. And that's something that was first invented in the 1870s,
which forgive my language, but that is bat guano crazy to me, right?
And it's a little bit like you built an aircraft carrier and it just keeps leaking and it's taken
on water and you don't know why. And you go down into the depths of the ship and you find that
there's like part of the keel was laid in 1870 and it's rotted wood and there's water coming
through it and you shrug and you're like, well, you know, that's how it works. No.
So again, I think we're going to keep seeing these same leaks happening because so many
things are attached to this fundamentally flawed thing called the phone number that
we are still attached to for crazy reasons. Well, but we know why we're attached to this thing.
It's because we want databases and this is the best global key we can have for the databases.
Something like a social security number. Well, it depends on the social security system.
And not everybody has a social security number and so on and so forth. Even your name,
people have different countries, different cultures have different conventions for naming.
So to apply consistency on names, addresses, all sorts of government ID, very difficult.
But the telecoms system had to come up with a global mechanism for attaching a code to every single phone. And it worked. Great for making phone calls. Yes, for making phone calls it works great. Yes. And now it's being abused by people who want to steal your identity and all your money. And now we're hanging everything off it. We're hanging
everything off those phone numbers. Absolutely. So Lee, I know you tend to be on the side of the
telco, but I'm going to test you here on this with this question. Do you side with the privacy
activists who want to block the transfer of data from Europe to the USA? Or are you on the side of
businesses wanting to collate more data about calls that were previously made from the same phone
number? This is tricky, Eric. So. Well, yeah, that's why we got to discuss these things.
So look, I'm all for preventing fraud. Right. But I'm not too sure that I support something
like this. Right. I do have a few concerns about this particular line. Who would have access to
that data? You know, what kind of data are they taking? You know, what's getting transferred back?
Is it my location ID within CDRs? And then, you know, we've just talked about voice calls, but what about data? Right, because data is the other side of the coin. Right, so is this going to expand into data? Are they going to be looking at my browsing history, my purchasing history, this type of stuff? Yeah. So, you know, some other things as
well is how is that opinion formed? Right. You know, what is it based on? Right. So you'd need
to have some kind of explainable AI in there. So we know exactly how and what factors are kind of
influencing the algorithm. And I think you quite rightly mentioned that you need to have an appeals
process, for if you were wrongly labelled as a scammer, right.
But how would you know who to appeal to, Lee? How would you know who to appeal to? Because if
data is transferred to databases, we're not talking about a single, you know, let's not get bogged down
in a single example here. If the principle can be followed in practice, there could be who knows how
many databases there are. And let's just take a real simple example. One thing they'll definitely transfer data on relates to something you talked about last season. Now, you made quite a cheeky comment that perhaps some of the female viewers of this show may not have appreciated, that men don't talk as long on the telephone as women. You even presented us with a graph.
Well, your phone number, they're going to look at the average duration of telephone calls from
your phone number. And if you're true to your word, Lee, you're the kind of guy who goes,
hey, how are you doing? Goodbye. And you do a quick call. You are more likely to end up with
that label that says suspected spam because that's the kind of thing they're looking at.
How would you even know who to appeal to, Lee, if that happened? And what would you be able to do
in terms of protecting yourself from your call not being picked up because somebody's attached
a suspected spam label to the calls you're making? Yeah, well, you know, if you're going to do
something like this, then it's about establishing that process. And maybe you can complain to a
regulator somewhere who would then investigate that for you. But look, Eric, I don't think
this is going to fly. It won't fly in the EU. The EU are not going to accept this, right?
Well, I don't want to speak on behalf of Americans. I don't want to speak on behalf of Ed.
But there's a heck of a lot of American businesses that are going to be trying to make money out of
this. So the idea it's not going to fly, it's going to occur. The databases will be compiled.
And to some extent, they will comply with laws in the countries where they're being compiled.
And the law in the US is not like the law in the European Union. So I think when you say
the regulators are always going to get involved, they might get involved, Lee, 30, 40 years down
the line. And in the meantime, we'll all be suffering because they're not on top of it now.
This is why this is an issue right now. The data transfer has already been taking place.
That US business was already purchased by a European business. And the sweet spot in the argument for the activists is they can say, we've already got a track record of defeating transfers of data from the European Union to the US in general, just because of the way the US is. But they're not going to be satisfied with that. They don't want there to be any databases whatsoever.
But we know that those businesses are out there. Their business strategy is to do it,
and they'll do it. And they'll try and find a way to legitimize it afterwards.
Love to have comments from the viewers on this point. I think this is going to run and run,
but I'm conscious of time. So let's fit in a few more topics as well before we wrap up today's show.
Hacking. Ed, I know you feel strongly about this one. So let's talk about hacking a little bit.
We know that a Russian comms provider, a satellite comms provider, was hacked through
their cloud services. We had Karsten Nohl on last season talking about cloud being a source of
vulnerability. And lo and behold, a Russian comms provider was knocked offline as a result of their
cloud services being hacked. We also had more recently Anonymous Sudan. So that's an Islamist
hacker collective that many say is backed by Russia. Recently launching denial of service
attacks on networks and systems in Nigeria, because they said Nigeria was opposed to the
pro-Russian coup in Niger. And in Kenya, because the Kenyan president, William Ruto, criticized
the generals who had taken control of Sudan. And that had some knock-on impact on the availability
of mobile money services. So considering what we've discussed about connected cars,
about cloud platforms being attacked, should we be relying less upon networks in general?
We tend to work in a sector where the assumption is more networks is always better.
Is there an argument that the hackers are just going to get worse and worse and exploit things
more and more, and that maybe we should take some things offline, so that a Kenyan trying to pay their bus fare or buy their groceries isn't stopped from doing so because Anonymous Sudan has knocked the service offline.
Are we too networked in general, Lee? Yeah, I think we need to take a step back
and think about what it is we're actually connecting to the internet and ask ourselves,
is this really necessary? Because if it's connected, then it can be hacked. And if you
look at some of the bigger hacks in the last few years, the access has been gained via some really
obscure IoT devices. So in the case of Target, where hundreds of millions of records of personal data and credit card information were stolen, they actually gained access through an air conditioning unit. Then you had a casino in America that got hacked through a networked thermometer in a fish
tank. And now you've got all these Toyota cars that are being stolen by hacking them through the
headlights. So I do think we just really need to think, and you've got to ask yourself, do I really
need to connect this device to the network? Yeah, but our businesses love to connect devices
to the network. We love it, love it, love it, because we make money from the network. And so
this is why I know this is going to be controversial in a show like this. Ed, do we just sometimes
lean too quickly into networking everything when the whole pace needs to slow down?
Yeah, I agree with Lee. And I agree with that point. I think we're networking a bit too much
too soon without recognizing the problems, like acknowledging the problems that are being created
and addressing them. Even if that's along the way, we're moving at a pace probably even faster than
that permits. And I think that part of what that contribution is too, is remember, we're going
more and more to software-based networks. And I'll never forget, like even this is back in
2007 and 2008, I worked on a book with Richard LeFave, who was the CIO of Sprint Nextel at
the time. And one of the things that stuck with me that he said when we were working together was,
as a CIO, or again, this is 2008 terms, this is pre-cloud, pre-cloud native stuff. He was saying,
you know, as a CIO or a CTO, you love hardware boxes. The light's red or it's green. If it's
red, you replace it with a green one. You're good to go. Software's black magic. It keeps you up at
night, you know? And now the whole network's going to be black magic that keeps you up at night.
And so I think that's part of the risk happening too, not just that we're networking too much,
but that we're maybe software networking too much. And the thing I would say, for people protecting their kids and protecting themselves while going out and operating every day in the digital world, is: remember that Hollywood's not helping us.
Hollywood always wants to make it seem like it's some master tech genius with skills and
knowledge that most people don't have almost like black magic, right. That can go in and take things
over. And yet at the same time, you're sharing an awful lot of information about yourself publicly.
And that's actually what's being used against you. Right. And I think that's one of the key
things that people have to remember: it's the simpler hacks that are actually more common, and more of a risk to a person, than the super complex computer whisperer ones.
There's too much to talk about, as always. We're not going to have time to cover everything in terms of the news that's happened over the last 11 weeks. And for the viewers, just to reiterate, our new format is going to be an hour and 15 minutes long. So we've still got 10 minutes left of the show, but we're going to struggle to even get this final topic covered in the time available. But I really wanted to cover this topic
because I think it links to so many of the themes that we've already discussed in the show. Now,
again, hot news, or recent news: the US Cyber Safety Review Board, which is a section of the Department of Homeland Security, recently issued a report on some methods used by the Lapsus$ gang of hackers and ransom takers. Their list of recommendations begins with a call for the urgent
transition away from using SMS and voice calls to transmit one-time passwords, a favorite topic
on this show. We had a lot of people, a lot of security experts at the beginning of this year,
wailing that we need to get more people using two-factor authentication with SMS after Elon Musk
removed it as an option for non-paying Twitter users. Ed, given this report from this very
highly esteemed group of people who work for the Cyber Safety Review Board, were those experts
allowing their negative feelings about Elon Musk to interfere with their judgment about how we should
be solving the problem of authenticating people going forward? I think so. And that's a rapid
fire on this one because I know we're short on time, but I think they were popping off against
Musk a little bit because you know, SMS has been deprecated for a really long time. So it's not
new news, right? It's kind of like re-announcing something that's already news. And the disappointing
thing is I think the Musk matter was probably a chance to bring experts and
non-experts together in a productive way to talk about 2FA and how to make it work, you know,
on a bigger basis. And that didn't happen. It became this sort of useless spat instead. And so
I think part of it is that, you know, there isn't really a great education on how crimes are being
committed with that technology and how that contributes. That education isn't really central
to the conversation at the moment, which is really foolish. And I think we probably need to teach it more on the
show, you know, just keep getting the message out there about connecting those dots very,
very specifically. Yeah. Yeah. We're doing the best we can. I'm sure we've got plenty of people
watching on LinkedIn and people watching on Twitch. Always glad to have your comments.
Lee, is the tide finally turning on the use of one-time passwords, or are we still going to see
their use being expanded in future? Yeah. Well, you know, they were banned in Malaysia and now
the Americans are waking up to it. But apart from that, I'm not really seeing any other countries
which are looking to take them out. I'm sure a couple of countries at the moment are
kind of waiting to see, you know, what replaces it and whether it's successful. But,
you know, for me, SMS is insecure, right? I find the whole OTP thing, I find it really annoying.
It's not seamless and it costs money to send, right? So as a user, I prefer my authentication
to be something that's done in the background, right? Some kind of silent authentication or use
of biometrics. But I think across the world you're going to see A2P OTP use
increase. And then it might go the other way, depending on which countries
kind of adopt or move away from it. Now, the Cyber Safety Review Board, they also commented,
and I love this, I'm going to hold my hands up and say, I'm really pleased that they said this.
They commented that a whole-of-society approach is needed to discourage juveniles from engaging in
the kinds of simple but effective criminal techniques involving SIM swapping, bribery,
extortion, the kinds of things we've been talking about every single show we've been doing. Now,
in some ways, their recommendations are quite like programs designed to counter
the appeal of extremism to young men who may otherwise become terrorists.
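As an aside on the one-time-password discussion above: the usual replacement for codes sent over SMS is an authenticator app generating time-based codes on the device itself, so nothing crosses the phone network to be intercepted. Below is a minimal sketch of RFC 6238 TOTP using only Python's standard library; the `totp` helper name is our own, and the secret used is the published RFC test key, not anything from the show.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """Derive an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    # HOTP core (RFC 4226): HMAC-SHA1 over the big-endian 8-byte counter
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the 4-byte slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The RFC 6238 Appendix B test secret, so the output is checkable against the spec
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, digits=8, now=59))  # -> 94287082 (published test vector)
```

At verification time the server derives the same code from its own copy of the shared secret and compares, typically tolerating one time-step of clock drift; because no code travels over SMS or a voice call, the interception and SIM-swap risks discussed on the show don't apply to this channel.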
Ed, question for you. Is hacking really now like a cult in the sense that it starts small,
watching fake news, YouTube videos, joining a Discord server about how to steal a cool username
for your favorite video game, but then escalates as impressionable people connect with each other
and become ever more dedicated to pursuing their goals? Yeah, I mean, I think,
you know, hacking was already like a pro sport for a while. It had legends, it had like its own
Hall of Famers practically, and, you know, legendary things that had happened, and it had
its own status hierarchy and skill level hierarchy and all that kind of thing, kind of like a,
you know, kind of like a sport does. And then the pandemic hit, and that's like Sardaukar
training for teenage and preteen boys. You're home, you're locked down, you can't go anywhere,
you have access to this. The one thing you can do is communicate with all your friends and be
online. And so now, what else do you have to do but create mischief online and share all that
information? I mean, what a perfect environment, you know, to train up a bunch of hackers. And I
think that that's exactly what happened during the pandemic, and probably related to, you know,
the huge rise in all the numbers we've seen on these different scams. And so yeah, I think that
taking a societal approach to it probably does make sense, because it is this sociological
outgrowth of the pandemic that you can't look past. I mean, it was already a thing,
and I think it exploded, you know, even more so into like a cultish level that you're talking
about. And again, it makes sense, because it's a digital thing that exists in digital space,
and is benefited by digital communication and replication of solutions that can be shared
digitally, right? And no wonder, right? It's not a surprise that, you know,
hand butter churning hasn't taken off quite as fast.
I hear you. I hear you. You know, you just sparked a thought in my mind there,
because it's about heroes and antiheroes. You know, a guy like Julian Assange,
he was already a hero in some domains with his hacker alias Mendax. So,
you know, it's possible to become a hero in that domain, as well as be seen as somebody who's just
being destructive. But there does seem to be this increasing attraction. And I know that you don't
like this when I say this, Lee, but it's boys. It's not girls. It's the boys that are getting
attracted into this. It's a kind of weird nexus of the Andrew Tate kind of macho lifestyle. If you
can have the sports car, you can have the girl, you can have it all quick and easy at a very young
age, linked to this kind of, you mentioned Mr. Robot, this kind of fabulism about what can be
done. Whereas actually it's pretty straightforward techniques. They're not doing some crazy hacking
technique. They're doing a SIM swap. They're getting on the phone to some customer service
person. Maybe they can sweet-talk them into making a mistake. Maybe they just give them a bribe.
They just give them a bribe. So, Lee, my question to you, do you agree that we do actually need a
whole-of-society approach to influencing boys and stopping them becoming hardened criminals
who end up deserving these lengthy prison sentences we're now seeing being handed out
to some of them? Yeah. You know, when you've got children as young as nine launching DDoS
attacks, and some surveys show that children aged between 11 and 14 are more likely to be
engaged in hacking than in smoking or shoplifting, then something needs to happen,
right? So, you know, I know in the UK we've got the National Crime Agency, right? They've partnered
with primary and secondary schools to educate young children on the consequences of being
involved in cybercrime. Now, that program is called Cyber Choices. However, I don't think
it's part of the national curriculum; it's a referral-based scheme, which I think stops a bit
short. So personally, I think the UK government should require all schools
to do it, right? I mean, the stats are showing that cybercrime is getting worse amongst that
particular age group. So I think what you're saying is you'd be much happier if your sons
get involved in the smoking and the shoplifting that you used to do when you were a lad,
as long as you keep away from the computer. Is that what you're saying, Lee? All we had
time to do was go behind the bike sheds and have a crafty cigarette. We didn't have any
computers in those days, right? So in other words, let's get the kids smoking cigarettes again and
get them off the computers. I know we're just joking. We're running out of time here,
guys, but I have to say I've missed doing this show. Thank you so much, guys, for being on the show.
We're out of time, but I've loved every moment of conversation. I have missed it and I'm really glad
we're back here together, guys. There's no more time, I'm afraid. Viewers, thank you for the
comments. I tried to read out as many as I could in the time we had available, but that's it. That's
all the time we've got available for today's show. Ed, Lee and I will return next Wednesday when our
guest will be Philippe Millet, chairman at the i3 Forum, previously a long-serving senior manager
at Orange. He'll be talking to us about the creation of the One Consortium, a new global initiative
that aims to harmonise rules and standards for call validation so they can be applied consistently
to international calls. We'll be live on Wednesday 30th August at 4pm UK, 11am US East, 8.30pm India.
Why not save the show to your diary by clicking the link on the Communications Risk Show webpage,
or really, my word, if you've watched the last season, you must get tired of me saying it,
but I'm going to keep on saying it. You should really subscribe to the Communications
Risk Show broadcast schedule and have the details of every episode added to your diary automatically
in the right time zone for you. Thank you, my wonderful co-presenters. It's great to be on this
show with you again, Ed Finegold and Lee Scargall. Thanks also to the hard-working producers of this
show, James Greenley and Matthew Carter. You've been watching episode one in the second season
of the Communications Risk Show. I'm Eric Priezkalns. Remember to visit the Communications
Risk Show website,, for recordings of every episode from last season.
Keep reading for the latest news and opinion about risks in the comms industry,
and go to the Risk and Assurance Group,, for access to
RAG's free services and content, including RAG's comprehensive leakage and fraud catalogs.
Thanks for watching, and we'll see you next Wednesday.