Welcome to the 105th instalment of the Kaspersky Transatlantic Cable podcast, where Dave and I take a look at some important stories that you may have missed this week.
We begin by looking at recent news on robocall-blocking apps. Although they may try to curb these phantom calls, there is more to them than meets the eye: Some of these apps are actually sharing data with third parties.
After that story, we turn to the latest big business to be caught listening in on supposedly private recordings. This time it is Microsoft, specifically contractors with Skype. From there, we move on to the new security cameras that are using facial recognition in King’s Cross. For those of you heading off to Hogwarts, you may want to use a spell or two to keep your privacy.
After that, we talk about yet another “whoops we’re sorry” — this one from Twitter, personal data, and advertisements. To close out the podcast, we look at intrusion by so-called warshipping.
If you enjoy the podcast, consider subscribing and sharing with your friends who need more regular updates on security. For the full text of the stories, please visit the links below:
- Robocall blocking apps caught sending your private data without permission
- Revealed: Microsoft contractors are listening to some Skype calls
- King’s Cross developer defends use of facial recognition
- Twitter says it may have used user data for ads without permission
- With warshipping, hackers ship their exploits directly to their target’s mail room
Jeff: All right Dave, to kick things off this week, this one kind of hits close to home for me, because the robocall epidemic in the US is way beyond fun anymore.
Dave: We don’t have too much — well I say that. Personally, I don’t have too much of a problem with it. But I do know friends and people who have problems with — we don’t call them robocalls over in the UK. I think it’s just like spam calls.
Jeff: That’s what I usually call them too, but I use four-letter words in front of it. I’m not sure I can say that on a family show, but it’s definitely not a fun and happy conversation, you know. And to be honest, I get at least six of them a day. And I usually try to get a person on the phone because I can have a lot of fun with it. It just becomes me trolling a person and them calling me some kind of blank American, because most of these people are not based in the US. And yeah, it’s a lot of fun, but I had one in Japanese yesterday that was outstanding. I thought I had Yuko on the phone. My Japanese is very limited.
Dave: I don’t know any Japanese.
Jeff: Konnichiwa, ni hao.
Dave: Oh, yeah. Well, I know that. Yeah. But going back to the story, it’s about robocall-blocking apps caught sending your private data without permission. So it’s an interesting one, isn’t it? Because obviously these apps are designed to protect your privacy and protect you as an individual, theoretically, but a researcher has found that a lot of these apps have been caught red-handed, so to speak, sending personal information to various third parties. One of them in particular, my favorite, was sending information to Facebook of all companies, before you even sign up. That’s amazing.
Jeff: What a world, what a time to be alive. And who woulda ever thought Facebook would buy data?
Dave: Sure, they’ve got enough data, right?
Jeff: It’s never enough data. Come on, you know that.
Dave: But I think it’s quite interesting, because a lot of these apps are running in the background. A lot of the time these apps aren’t actually installed on your phone: there’s an arrangement between the telco provider and the app, and they work almost in the background so that you don’t have to install anything. You can also install these apps yourself, if you want. But I think that just goes to show this is a really gray area when it comes to privacy and user consent.
Jeff: I think the problem with a lot of this is that these telco companies are operating in that gray area, if you will, but they see things in green: where can they make a profit? And a lot of times the answer is user data. There are a number of companies in the US now being yelled at by our Congress to stop doing so much bad stuff with data, or selling user data, or storing it. And I think this is just an extension of those things. These apps, it is what it is; I think you have to realize what it is that you’re using. But the problem is that, as the researcher says, privacy policies are great, but apps need to get better about abiding by them. So when you think about it, it’s the whole question of: you’ve built an app, and now how do you monetize it? And a very easy way to monetize things is typically through that user data. And knowing that someone like Facebook is buying it, you’ve got a really quick ability to say, okay, this cow is kind of fat, let’s smoke it. So it’s just one of those things where, like the robocalls or the spam calls, this is an opportunistic area where there’s going to be that gray area, as you said.
Dave: Yeah, I do find it kind of funny that there are spam callers trying to make money off you, and now legitimate companies are trying to make money off spam calls. So it’s almost like we’ve come full circle: we’re seeing people make money off people trying to make money.
Jeff: No honor among thieves.
Dave: So yeah, you’re quite right; the researcher says that privacy policies need to be updated. And we do talk about privacy policies throughout this podcast; it’s almost a running theme. But moving on to the next story, which is still privacy related, and privacy policy related: this one, from Vice, is about how Microsoft contractors are listening to some Skype calls. And, shock horror, this comes off the back of Google and Apple, whose respective AI programs have been caught, I wouldn’t say red-handed, but have been caught sending transcripts and audio to third parties for them to translate. And lo and behold, Microsoft have been caught doing exactly the same thing with Skype. Right?
Jeff: So yeah, the whole thing is, this doesn’t surprise me at all. Let’s be honest here, and let’s go back to the root of the issue: one of the things Skype offers in its app is the ability to give real-time translations. That’s a cool feature that a lot of people heralded when it came out, that AI could actually help make things easier. Think about it, Dave: how many calls are we on with the team, like when we do our monthly call? You’ve got people speaking Japanese, Turkish, German, Russian, French, sometimes Italian, Spanish, Portuguese. If you were using Skype for it, you could get real-time translation and have people understand in their native tongue in case something was missed in the English, so that could be really good. But the problem is that to train an AI you need to be able to test it and see how it’s working, and human feedback only goes so far, come on. How many people listening to this podcast actually click those star buttons after every Skype call? How would you rate our call?
Dave: Not right now, please, not right now.
Jeff: I rate it as poopy every time because it’s Skype. And it’s why I don’t use it as much.
Dave: The five star thing that comes up every single call really annoys me. And I don’t know why.
Jeff: I liked when it used to give me feedback and be like, yo, the call just dropped, I heard a random person dropping in on the call and …
Dave: Yeah, yeah. But don’t you think it’s kind of funny how you get a lot of these big developers: Microsoft, Google, Amazon, Apple.
Jeff: Also, I’m sure Facebook’s going to come in with their Portal, too.
Dave: They will with this one, you know. And they’re all espousing AI, this mystical, massive giant computer in the sky.
Jeff: So they kind of sound like the marketing from many cybersecurity companies.
Dave: Yeah, exactly that. But when you actually open the AI door, it’s just a few people typing away at a computer saying no, not that, that. So it’s kind of funny, isn’t it? All the marketing and PR and stuff goes in front of it, and then actually behind it we’ve just got a bunch of contractors translating things for these AI tools. So we’re not quite there when it comes to real AI, are we?
Jeff: What is AI, really? Let’s get past the marketing BS. But when you look at this type of thing, the problem that a lot of the security watchdogs have pointed out, and privacy watchdogs are saying the same thing, is that the policies Microsoft has in place for Skype aren’t necessarily 100% transparent about how this data is being processed. And I think this is another story in the long line of: be careful what you’re using, or what you invite into your home. It’s almost like if you invite that vampire into your home, they’re going to come in; if you feed that mogwai after midnight, you’re gonna get some bad dudes in your home popping off, then jumping in a swimming pool and multiplying.
Dave: Gremlins reference. I love that film. Not the second one, though, let’s pretend the second one never happened.
Jeff: The second one I watched in the movie theater; I was, like, 10. But you know, the first one was dope. And I think when we look at this whole area, it’s something that shouldn’t be surprising with Microsoft here, because we’ve seen it with everybody else that’s come up.
Dave: Yeah, definitely. And like you say, Facebook’s Portal seems to have gone quiet recently, the whole screen thing. It’s kind of interesting, but I think you’re quite right; it talks about AI and things like that.
Jeff: I only want it for meetings with Serge.
Dave: Yeah, that’d be interesting, you two on one of those things.
Jeff: What would make it really special is a Faraday cage. So I think it would be very fun to have one, but I’m not sure they’re going to let me expense the 300 bucks for a pair of them. I’ll ask again.
Dave: Shall we jump over to the next one? Time’s ticking along, isn’t it?
Jeff: Yeah, this one I’m hoping doesn’t come back to bite me this week.
Dave: Although, you know, you’re not going through King’s Cross, so I think you’re all right.
Jeff: I think I might because I think we’re going to dinner by there.
Dave: Okay, so yeah, the story is on the BBC: King’s Cross developer defends the use of facial recognition. For anyone not over in the UK who doesn’t know about this, King’s Cross is pretty famous for Harry Potter’s Platform 9¾. Yeah, everybody knows about that. It’s actually real; if you go there, there’s a Platform 9¾.
Jeff: Well, how the heck are you supposed to know? How the heck are the muggles supposed to know who the wizards are without some facial recognition?
Dave: But the story talks about how the developers have installed facial recognition software inside the CCTV cameras there, and they’re framing it as public safety; they’re saying it’s all about public safety. But the age-old problem of user consent comes in here, because people aren’t giving consent to the developers to do this facial recognition. CCTV is different, because it’s just blanket coverage; there’s no actual facial recognition going on. So is it another step into Big Brother territory?
Jeff: Yes.
Dave: … or are we just overhyping this whole thing? You think so?
Jeff: I think the whole problem is that you guys have camera envy of Singapore. At the end of the day, you guys are trying to measure who can get more faces on camera, right? And we’re in the age where China’s using social ratings, or experimenting with them. So it’s not surprising that this is here. I think it’s just another piece of technology that makes surveillance easier, in a way. And with the way London is, with video surveillance everywhere, people shouldn’t be surprised. In one way, I’ll side with the developer here, who says this is an easier way for them to help if police ask for the CCTV footage. And it probably helps with the dragnet if somebody is looking for a person of interest. And like you said, it’s a very popular area, so you’ve got a lot of families going through there, a lot of potential for kids going missing, so I could defend them on this one. But I only defend them with the caveat that it’s London. And let’s be honest here: outside of taking a pee, there’s probably a camera watching you almost everywhere.
Dave: Yeah, I think the whole thing with cameras and the UK in general, because it’s not just London, we have cameras absolutely everywhere. I think one of the problems stems more from a historical thing, the IRA and the bombings that we had back in the 80s and early 90s.
Jeff: We take security very seriously.
Dave: Yeah. And it was a tough time. So obviously, you know, CCTV cameras were put in place as part of that, to try and defend against it.
Jeff: We’re not gonna start singing U2 songs, are we?
Dave: No. Please, no. So, I kind of understand that. But at the same time, this is definitely a step into Big Brother territory. And a lot of people who use King’s Cross aren’t going to read this story; they’re just going to be unwilling participants in it. So it’s just one of those difficult areas that relates to our podcast.
Jeff: So the next story for this week, and I think this makes our third story in a row about data being used incorrectly: Facebook’s not the only social network that used user data for ads without permission. Uh-oh, Twitter did it too.
Dave: Yeah, yeah. This one’s from Reuters, and it’s a really pretty short story; I think even Twitter’s still looking into this at the moment. They’re basically saying that they mistakenly used personal data for ads, if I’m reading this correctly, when they shouldn’t have.
Jeff: There’s no air quotes in there. I really feel there should be. I really am upset there’s not a use of air quotes in the story because …
Dave: They also didn’t say “we take user privacy seriously.” They missed a trick there.
Jeff: I think they’re smart enough to know that everybody knows they don’t take user privacy seriously. Social media, just like that. Just like they don’t take hate speech seriously.
Dave: Well, that is a story for another day. I think it’s certainly something.
Jeff: It’s not surprising. Users here shouldn’t be surprised. Lo and behold, if you aren’t paying for the product, you are the product; you’re going to be monetized by the company. I think that’s the easiest way to summarize it. And Twitter, welcome to the shame party with Facebook.
Dave: It’s not the same without them. So shall we jump over to the last story? We’ve gone through these at breakneck speed. This one’s over on TechCrunch again, and it’s about, I like this story, something called warshipping. Whoever thought of that name, hats off to them; it’s a great, great name. With warshipping, hackers ship their exploits directly to their target’s mail room. I’m surprised this hasn’t been done already, to be perfectly honest.
Jeff: Well, to be honest, the story is interesting, but it’s also similar to one that came up at SAS this year. One of those stories was about how you can slip microcomputers, like a Raspberry Pi, into devices to infiltrate corporate networks. A lot of these require physical access; the ones shown there used a typical USB keyboard or USB mouse with a Raspberry Pi hidden inside, which was able to sniff out some things and send them back to an attacker. This story is really cool because it’s an interesting way for an attacker to infiltrate a system: by shipping a Raspberry Pi with a 3G modem, along with some other components, inside a piece of cardboard, or a toy or something like that, so it can get onto a network and start sniffing out passwords and things like that. And it really just goes to show, you know, I think we saw stories this year about Raspberry Pis being found even at NASA.
Dave: Yeah, I think I read that story, and I wonder as well whether a lot of it stems back to that hacking TV program, Mr. Robot. I’m sure in season one of Mr. Robot they use a Raspberry Pi to infiltrate a corporate network, so that kind of foreshadows things we see later on. And I think this is the natural progression of things like this. I mean, Raspberry Pis, I’ve got one right here, and it’s, what, a couple of inches?
Jeff: It fits in the palm of your hand.
Dave: Yeah. And they’re so power-light that you can power them with a USB battery and things like that. So, you know, this is where we are now. And it wouldn’t surprise me if, five years in the future, when computers are even smaller than this, probably the size of a nail or something, we have more problems with things like this. It’s a worrying trend we’re seeing, and I think it’s easy for us to get excited by it in some ways, because it’s another evolution of cybersecurity and sort of gray-hat hacking and things like that. But where does it end?
Jeff: I think one of the things that was really brought up in the SAS presentation was: can you start to protect against this? Can you educate key people in the mail rooms and things like that, or even put technology policies in place, to start looking for these types of things? But how do you do it? In this case, are you going to X-ray every package? Is somebody really going to spot it, especially if something like this is hidden in a doll? You’d think it’s just a talking doll.
Dave: Yeah. And I think that’s where the danger is for enterprise.
Jeff: I think it’s really just a matter of having good security solutions in place and making sure that all the holes are buttoned up. And if you’re not doing that, make sure you’ve got some type of resources in place, either your own team testing against it, or attackers going up against defenders, those types of things. So this is interesting, because they don’t show the proof of concept, since it’s part of a testing thing that IBM does, but it’s definitely something pretty interesting here, because it is quite an ingenious setup, and one that cost about 100 bucks to get into a network. And to be honest, if you think about how these things can be addressed: the package can be addressed to a mailroom, or to someone, knowing that it won’t get opened for a while, because at a big company it might just fall into an area of, hey, this is for a new hire we have to wait on before they get into the system.
Dave: That’s what I was thinking as well, because my first thought was, how would they ensure that it’s not opened? So they could perhaps do a bit of fishing around and find someone who’s gone on holiday, then post the item to them, and someone would just leave it on the side of their desk or something. Right?
Jeff: I do like the name warshipping, though.
Dave: I’m not quite sure.
Jeff: It just reminds me of battle. It just reminds me of battles.
Dave: Yeah, that’s what I thought when I first read it as well. And I thought, why is he sending me this?
Jeff: But it is literally a Trojan horse, if you think about it. In the real sense of the word, this is a Trojan horse, and it’s the actual —
Dave: — except without the 700 Greeks inside it. That would be a lot of people inside a very small box.
Jeff: You know, I’m pretty sure, though, if you wanted to, you could probably print something, put a sticker of some Greeks on there. Or possibly put a condensed version of 300 onto it.
Dave: Yeah, there we go. That’s how you get 100 Greeks into a Trojan horse these days.
Jeff: All right. And that’s a good place to end on, because we’re going to go down a slippery slope. So, guys, this week’s edition of the Kaspersky Transatlantic Cable podcast has come to an end. If you liked what you heard and haven’t subscribed, please subscribe. If you’ve liked this for a while and support the podcast, please give us a good rating on your favorite podcast app. And just remember, sharing is caring, so if you’ve got somebody who could use a little bit of cybersecurity in their daily life, please share with them. If you think we got something wrong, please let us know @Kaspersky on Twitter, and if you think there’s a story we should cover, hit us up there as well and we’ll look to get to it in a future podcast. So until next week, have a great one.
Dave: Bye-bye.
[Automated transcription lightly edited]