Radio Kempe
Radio Kempe is here to connect you with the information you need to tackle current issues. Join us as we talk about difficult topics. Help us as we test assumptions and challenge traditional ways of thinking. Get curious, tune in, and join us on the journey to prevent child abuse and neglect every month of the year! Do you have a topic you would like to hear about on Radio Kempe? Email us at kempe.center@ucdenver.edu and let us know.
21st Century Child Abuse: A Conversation with Julie Inman Grant, Australia’s eSafety Commissioner
Host: Ernie Allen
Recorded: July 2, 2024
Julie Inman Grant heads the world's first government regulatory agency committed to keeping its citizens safer online – particularly its children. She uses civil powers to take down illegal or harmful content, including child sexual abuse material, pro-terrorist content, image-based abuse, and cyberbullying. Julie describes her approach, "Safety by Design," which balances user safety, particularly child safety, with new technology by building safety into products from the start. What can we learn from Australia's pioneering efforts, and how can we expand this approach worldwide? Join Radio Kempe for an interview with Australia's dynamic eSafety Commissioner, Julie Inman Grant, as we open our eyes to the new risks and harms to children in this digital age and work toward real, practical solutions.
00:00:00:0 - 00:00:21:1
Unknown
Welcome to Radio Kempe. Today is the latest podcast in the series 21st Century Child Abuse. I'm Ernie Allen and I will be your host today. I spent many years in the fight to keep children safe. I'm an advisor to governments, law enforcement, technology companies and others, including
00:00:21:1 - 00:00:24:9
Unknown
Kempe Center for the Prevention and Treatment of Child Abuse and Neglect.
00:00:25:0 - 00:00:55:9
Unknown
Today, our guest is Julie Inman Grant, Australia's eSafety Commissioner. Julie leads the world's first government regulatory agency committed to keeping its citizens safer online. Prior to her appointment as eSafety Commissioner, she had extensive experience in the nonprofit and government sectors and spent two decades working in senior public policy and safety roles in the tech industry at Microsoft, Twitter and Adobe.
00:00:56:0 - 00:01:30:3
Unknown
The eSafety Commissioner in Australia is the first regulator of its kind in the world and focuses on protection through reporting and investigations. The eSafety Commissioner has a range of civil powers to compel takedown of illegal or harmful content, whether it's child sexual abuse material, pro-terrorist content, image-based abuse, cyberbullying or similar problems. Julie, your pioneering role is attracting attention worldwide.
00:01:30:4 - 00:01:37:5
Unknown
Tell us what you're doing and the impact you're having.
00:01:37:6 - 00:02:11:2
Unknown
Well, thank you for saying so. You know, it's obviously an honor and a privilege to be in a role like this. And your listeners may have noticed I don't have an Australian accent; that's because I was born and raised in the United States. I spent five years with Microsoft in Washington, DC, in the 1990s, an era I fondly refer to as tech policy ground zero, when we were really trying to shape the Internet around the Communications Decency Act.
00:02:11:3 - 00:02:35:5
Unknown
I was right in the midst of the US DOJ v. Microsoft antitrust trial. That was a trying five years. So they sent me to Australia, and I've now been here for about 24 years. And again, quite an honor to be appointed as, as you said, the world's first eSafety Commissioner. So we've had to write the playbook as we have gone along.
00:02:35:6 - 00:03:02:7
Unknown
And what's really been nice, well, you kind of feel like you're at the front of the peloton going up the mountain with no one drafting behind you. And again, there's no real map in terms of where you need to go. And in some ways we've got some very tough adversaries in terms of the predators that will find ingenious ways to creatively misuse technology.
00:03:02:8 - 00:03:34:4
Unknown
And while I wouldn't always call them adversaries, we're also up against a very entrenched, powerful, wealthy and sometimes stealthy technology industry, which hasn't, in my estimation, done everything it possibly can to keep its platforms safer. And part of this goes back to platform design, which we'll talk about. Well, let's talk about that a little bit now, because I know no industry wants to be regulated.
00:03:34:4 - 00:04:10:9
Unknown
So how are you addressing that? How are you moving forward in the face of the kind of resistance you're facing? Well, it's interesting. And again, that has definitely been the journey. The Online Safety Act of 2015 was the initial enabling legislation that established the then Children's eSafety Commissioner. And it bestows upon me three primary roles: regulator for online safety, but also coordinator and educator for online safety.
00:04:11:0 - 00:04:37:9
Unknown
But when I looked at that model, I thought, yes, of course, the first part has to be prevention. We need to develop an evidence base and make sure all Australians have the knowledge and the tools they need to keep themselves safer online, or to know where to go when something does go wrong. And if you go to our website at esafety.gov.au today, we've got 1,200 pages of really great evidence-based materials.
00:04:38:0 - 00:05:00:3
Unknown
So the second element, or the second piece, is protection. And that is where we use our regulatory powers, and I'll walk through both our complaint schemes and our systems and process powers. But what was missing for me was: what about meeting the threat surface of the future? What about that proactive and systemic change? We know technology
00:05:00:3 - 00:05:02:4
Unknown
is always going to outpace policy.
00:05:02:5 - 00:05:27:8
Unknown
So how do I, as an anticipatory regulator, think 18 months to two years ahead, so these paradigm shifts don't hit us in the face? And this also involves shifting more of the responsibility back onto the platforms themselves. You know, the fundamental building blocks of the Internet weren't really built with safety and children's best interests in mind, but we know that children are the most vulnerable.
00:05:27:9 - 00:05:54:5
Unknown
And so this is where the Safety by Design initiative came in. This is also where all the work we do with international engagement and collaboration happens, and then our tech trends and challenges policy briefs. So, you know, I tried to bring safety by design to Microsoft when I was there as the global head of privacy and safety.
00:05:54:6 - 00:06:24:2
Unknown
I was in the Trustworthy Computing division then, and sitting in product reviews, we were trying to make sure that we were engineering out security vulnerabilities and preventing data breaches, engaging in, you know, security, defense in depth and privacy by design. And I said, well, what about personal harms? What about the social interactions that are leading to bullying and child sexual exploitation?
00:06:24:2 - 00:06:57:0
Unknown
And, you know, at the time, the company was really becoming an enterprise company and LinkedIn wasn't in its sights. There were some properties like Xbox Live, which was not the gold standard it is today in terms of safety, and Skype, which has long been known as a primary vector for child sexual abuse material. So I brought the idea of safety by design to the executives, probably around 2010, and it wasn't something that they wanted to prioritize.
00:06:57:0 - 00:07:29:3
Unknown
So I actually knew that my wonderful 17-year journey there was probably over. But then I spent a few years at Twitter and then Adobe. And, you know, I'd spent a lot of time trying to be that safety advocate, or that agitator, inside the companies. And I can't tell you how many times trust and safety was referred to as a cost center or a friction point.
00:07:29:4 - 00:07:55:1
Unknown
And I thought, now, as the regulator, how do I actually do this with the industry rather than to the industry, and bring them along? Because otherwise we're just going to be playing a huge game of whack-a-mole, because the threats will change all the time and the technology is going to change. We need to have them assess the risks upfront and embed the safety protections at the beginning, rather than retrofitting after the harm has been done.
00:07:55:2 - 00:08:12:4
Unknown
So that's the basic concept. And we sat down with 180 different organizations. The first phase, just to arrive at the principles, took about nine months, and I think meaningful consultation helps you arrive at the right place. And so those three pillars are service provider
00:08:12:4 - 00:08:19:1
Unknown
responsibility, user empowerment and autonomy, and then transparency and accountability. And there are a whole range of things that companies can do under that.
00:08:19:3 - 00:08:56:5
Unknown
But of course, we know that principles are only useful if they're implemented. So that's when we decided to develop some risk assessment tools and test them with a number of companies, to surface best practice, but also to give a light-touch module for startups who may not even be thinking about safety, or about avoiding being the next tech wreck, and a much more advanced one for mid-tier and enterprise companies. Because you need a culture of safety from the very top down for products to actually be safer.
00:08:56:6 - 00:09:25:1
Unknown
Well, I want to linger a minute on that point of safety by design, because you were, and are, in fact, the world's voice, the primary advocate for safety by design. What kind of implementation and what kind of impact did you see happening from your advocacy on that? You know, without talking about regulation yet, what kind of change did you see taking place?
00:09:25:2 - 00:09:59:3
Unknown
Well, it was actually a slow change, and more of an evolution than a revolution, I would say. And I think a lot of people were skeptical at first. They were like, well, you've got formal powers; why would you do this voluntary thing and do it with the industry themselves? But having spent 22 years within industry, you know, I do believe that a government entity is not going to get to know every technology, every service, every platform the way that the company does.
00:09:59:4 - 00:10:28:2
Unknown
And they will be best placed to determine what the appropriate safety interventions are. This was just meant to be a catalyst to get them actively thinking. And my experience within the industry is around trust and safety. We used to talk about coopetition: of course, these companies are fiercely competing on a range of things, but one of the areas where they do cooperate is safety.
00:10:28:3 - 00:10:54:7
Unknown
And so, you know, a lot of the innovations we've seen, around conversation controls, for instance, default settings, safe search settings: they have been pioneered by one company or another, and then the other companies have adopted and adapted those for their services. So I see this as a very positive kind of virality. But I was a lone voice in the wilderness for a long time.
00:10:54:7 - 00:11:23:5
Unknown
And you would appreciate that I used Ralph Nader and Unsafe at Any Speed as an example, back in the 1960s, remember, on the Chevrolet Corvair and the traffic fatality data that meant the US Congress and parliaments all around the globe had to legislate for the embedding of seatbelts. You know, at the time the car manufacturers pushed back vehemently.
00:11:23:6 - 00:12:06:0
Unknown
But of course, today we take for granted when we get into our cars that there are seatbelts, and we've got the five-point harnesses, and there are airbags and anti-lock brakes and a whole range of technology features, and cars actually compete on safety standards; that has become a selling point. So I've also found that for many people, when you talk in esoteric terms about things like safety by design, and when people don't really understand how the Internet is architected and how things work, using real-life analogies helps them understand basically what we're trying to get across.
00:12:06:1 - 00:12:37:7
Unknown
And so you talk to the everyday person about embedding virtual seatbelts and erecting digital guardrails and, you know, putting in traffic circles instead of stop signs, so that people, as they're approaching, just slow down a little bit. They may get to the same endpoint at the same time, but they're slowing down, looking to the right to be a little bit safer, merging in an orderly way, and again, still getting there at the same time.
00:12:37:7 - 00:13:07:3
Unknown
So I actually stopped using the auto analogy, and it's interesting, I heard it from Frances Haugen, and now the U.S. Surgeon General; there are lots of people using that. So I've moved on to a water safety analogy, which I can share with you later. Well, I'm interested: at our conference in October, you were a featured speaker, and you spoke to Kempe associates and supporters from around the world.
00:13:07:5 - 00:13:37:1
Unknown
And you used that phrase, the Ralph Nader phrase, and you said, this is our seatbelt moment. So I'm interested in the kind of resonance that has had and whether you see that kind of change beginning to happen. And I think it's important to point out, as you pointed out, that not only did seatbelts save lives, but they improved the business bottom line for these companies, even though they fiercely opposed it.
00:13:37:1 - 00:14:32:0
Unknown
So translate that to where we are today in the tech world. Right? Well, you know, I have used Meta's former catchcry of moving fast and breaking things as, you know, the era of social media that we didn't want to see repeated. How about, you know, going the speed limit and getting there safely? But what I think we saw in the March-April 2023 time frame was the same behavior with AI companies, in what I call the AI drag race, where you had all of these major companies jockeying for position. If they didn't have an AI play, they were investing in companies like OpenAI, or they were open-sourcing their own, you know,
00:14:32:1 - 00:14:38:7
Unknown
AI platforms. And we saw the same behavior. And so I think that was really a
00:14:38:7 - 00:14:39:4
Unknown
tipping
00:14:39:4 - 00:14:56:7
Unknown
point, because this is when we started to hear governments talk about AI safety. And again, having been in this industry for over 30 years, I've always thought of safety, security and privacy as three legs of the same stool.
00:14:56:8 - 00:15:24:3
Unknown
But often safety was the shorter leg when it came to, you know, tech companies and where they prioritize things. You know, privacy has certainly had precedence over safety. And I still believe there's a false binary argument out there. I think we can have all of these things, and we have to balance that stool if we're going to get things right.
00:15:24:3 - 00:15:58:6
Unknown
And I can talk to you about where we landed with our standards. But, you know, on safety, people are now actually thinking about AI and the existential risk to humanity, but also, you know, the shorter-term harms that we're already seeing. Deepfakes are so easy to make now; you don't need thousands of images, vast amounts of computing power and technical expertise to be able to morph a young woman's head or a girl's head onto a porn star's body.
00:15:58:6 - 00:16:22:5
Unknown
And this is what we've just seen at a school, where a teenage boy downloaded or harvested images of 50 of his female classmates from social media and created deepfaked image-based abuse of them. And it's had a horrible effect, obviously, on the individuals, the school and the community. And there just still aren't adequate guardrails in there.
00:16:22:5 - 00:16:57:1
Unknown
And again, we've got some provisions in our new standards that will tackle not only the platforms and libraries that host open-source AI nudifying and undressing apps, but also the apps themselves if they don't have proper safety protections embedded in. Well, and related to that: one of the concerns I know the Kempe Center has had is the effect of technology on the mental health of children, the explosion of teen suicide and related mental health problems.
00:16:57:1 - 00:17:32:3
Unknown
How are you addressing that? And what is Australia doing to basically balance social media benefits with social media harms? Well, listen, there's a huge debate happening right now in Australia, probably on the back of Jonathan Haidt's The Anxious Generation. And I often point out we actually need an evidence base. You know, you can't totally compare Australian children's experiences to those of American children.
00:17:32:3 - 00:18:02:9
Unknown
I mean, I think American children have it tougher, with the fentanyl crisis and gun violence and, you know, social inequality that is much more marked in the US than it is in Australia. But the opposition party has said that, within the first hundred days of coming into government, they will institute a social media ban and only allow those over the age of 16 to join social media.
00:18:03:0 - 00:18:39:6
Unknown
And this is where my water analogy has come in. The ruling party has said, well, we'll look at it; we have some questions. You know, we don't try and fence the ocean, but we do fence pools in Australia, and they're backed by enforcement. And we have things like shark nets, and there's the well-known adage that we learn when we're very young on the beach: you swim between the flags, because that's the area that the lifeguards can patrol, and outside there may be rips which can take you out to sea.
00:18:39:7 - 00:19:07:5
Unknown
I think the analogy is that there are algorithmic rips that can lead young people astray. There are predators, like sharks, or like pedophiles, that can harm our children. But we teach our children to swim, and we teach them to swim at the earliest ages, up through the teenage years. We teach it at home, we teach it at school.
00:19:07:6 - 00:19:35:6
Unknown
We basically give them the equivalent of digital literacy. And so this is where I've tried to bring this in. And what do we mean by social media? Because kids aren't posting to Facebook like they did nine or ten years ago. They're using ephemeral media, they're using short-form video, they're communicating through group chat, they're using encrypted messaging services and gaming platforms and dating platforms.
00:19:35:6 - 00:20:03:0
Unknown
So is that what we mean by social media? And, you know, what does the evidence base say about the right age? I think we all agree 13 is probably arbitrary, but not every 13-year-old is alike. It really depends on individual circumstances: maturity, underlying mental health issues, the quality and quantity of time they're spending, parental engagement, their personal circumstances.
00:20:03:1 - 00:20:28:7
Unknown
So I just want us to have a much more reasoned debate, because we also don't want to take a blunt-force approach that effectively means young people will try and circumvent the rules and go to the much darker recesses of the Internet. And we've spent nine years trying to get children to engage in help-seeking and to talk to a parent or a trusted adult when something goes wrong.
00:20:28:8 - 00:21:02:2
Unknown
And we don't want to undermine that help-seeking behavior. So it'll be interesting to see where it lands. You know, that's a policy the government will decide on. But my other point is, I don't see how we could possibly implement or enforce that now, because the vast majority of technology companies we're talking about don't have effective age assurance systems in place, and may or may not know the ages of all of the users on their platform; or at least they haven't indicated that they know.
00:21:02:2 - 00:21:31:7
Unknown
Well, and as you are aware, in this country, as states have attempted to impose age verification or age assurance, it has provoked a huge response from the tech industry, which is challenging it as a violation of free speech and other constitutional protections. So the challenge is difficult. And you talk about the role of government. One of the points I want to pursue a little bit is that, because of you, Australia leads the world in this area.
00:21:31:8 - 00:22:04:8
Unknown
You have done more. You've put Australia at the forefront. But what you've done is also now beginning to be used as a model for other countries and other parts of the world. I know you've created a global online safety regulators network. Can you talk to us a little bit about that network, about its growth, and whether other countries are beginning to do things similar to what you're doing in Australia?
00:22:04:8 - 00:22:21:1
Unknown
Right. Well, I talked about 2023 really being an inflection point, and I very much believe it has been. That is the year that the Digital Services Act across the European Union came into force.
00:22:21:2 - 00:22:54:1
Unknown
You know, the UK Online Safety Bill was deliberated for almost six years and finally came into force in 2023. The Fijians have an online safety commission. Korea has been very active, as has South Africa, particularly through their hotline around child sexual abuse material. And I guess I'm just of the belief that the Internet is global; laws are national and local.
00:22:54:2 - 00:23:30:8
Unknown
And once these other regulators started coming on board, including Ireland, I thought, we have to work together so that we don't create a splinternet, and so that we can learn from each other; each regulator, every scheme, has its own strengths and weaknesses. So, for instance, we have a range of complaint schemes where we're taking complaints from the public all the time when a child is being cyberbullied, and we have a 90% success rate in terms of getting that content down, and an 85% success rate in terms of getting image-based abuse down.
00:23:30:8 - 00:24:02:1
Unknown
That includes deepfakes but also sexual extortion content. We're the hotline for Australia, and we're also, as you said earlier, tackling terrorist content. We have an adult cyber abuse scheme as well. So all of that allows us to remediate harm in real time. Most of the other countries do not have complaint schemes; Ireland is easing into a complaint scheme.
00:24:02:2 - 00:24:30:2
Unknown
But what this gives us is a really rich repository of data about what the trends are, but also what the systemic weaknesses of these companies are. So we're figuring out ways to be able to share this intelligence. I'm hoping over time we'll move towards joint enforcement actions. And I've just signed an MOU with the European Commission to be a partner in this.
00:24:30:2 - 00:24:59:0
Unknown
And of course, the European Commission doesn't do anything by halves, and they've put a lot of thinking and resources into this. And our Online Safety Act is being reviewed right now as well. So we've seen some leapfrogging. If you look at the penalties and the enforcement measures that many of the European countries have, of course, they're just miles ahead of where we are.
00:24:59:0 - 00:25:35:9
Unknown
Our penalties and enforcement tools are quite modest. So this allows us to learn from each other and to continue leapfrogging, and I think that's to the benefit of all citizens. We're also trying to do things like explain to people how online safety and content moderation are compatible with human rights, a range of human rights. So we put out a statement that first year, and this year Ofcom in the UK is chairing a gathering, and we put out a statement around regulatory coherence.
00:25:35:9 - 00:26:02:5
Unknown
My team has been working very hard on what we call a regulatory index, so that companies can actually see that if they meet Australia's standards in the area of child sexual exploitation, they'll also meet the standards of the Irish and the UK and the 27 countries in the European Union. So the companies will be having to make a lot of changes.
00:26:02:5 - 00:26:30:5
Unknown
We have to recognize that we're never going to have perfect synchronicity, but to have a degree of coherence is really, really important. Well, I feel obligated to ask: you mentioned in the opening that you're an American by birth, and you're obviously a key Australian leader right now. What is your sense of where we stand and what the opportunities are in the United States?
00:26:30:5 - 00:26:39:6
Unknown
I know you mentioned the Surgeon General's recent statement on that. Where do we need to go? What's happening now?
00:26:39:6 - 00:27:19:7
Unknown
You know, the simple fact of the matter is, if the US weren't such a permissive hosting environment and there were more constraints, particularly around illegal content like child sexual abuse material and graphic terrorist content, I think we'd all be in a better place. And you may know that I was in litigation with X Corp around a violent stabbing video that was deemed a terrorist incident, and I issued what I call a formal removal notice.
00:27:19:7 - 00:27:56:3
Unknown
And I issued formal notices, and every other company complied, including Telegram, but X Corp basically said, screw you, we'll see you in court. So we beat them to it. They've now filed five different cases against us, which is what we've seen happening with NGOs and academic researchers that are actually just trying to understand trust and safety governance, trying to understand the toxicity and levels of online hate and abuse on these platforms.
00:27:56:4 - 00:28:26:1
Unknown
And I actually am a little bit concerned, too, you know, that it didn't get through. And then we've had three significant Supreme Court decisions, including the rollback of the Chevron doctrine, and a very different debate in the United States than in Australia about conservative voices versus progressive voices. Online safety is a very multipartisan issue in Australia.
00:28:26:1 - 00:28:54:1
Unknown
And while the political parties might disagree about the ways to get there, parliament was unanimous in passing both versions of the Online Safety Act. And what they effectively decided is: everyone, of course, supports free speech and freedom of expression and recognizes the benefits of the technology, but they basically said, we want to draw a line when online discourse veers into the lane of online harm.
00:28:54:1 - 00:29:21:7
Unknown
And we want to set up an independent statutory authority to use a set of criteria, and to be transparent and accountable, but to help Australians when they're experiencing serious online abuse, and particularly when the platforms fail to act or things fall through the cracks. So we serve as that safety net. It's a very different discussion.
00:29:21:8 - 00:29:53:5
Unknown
Well, I think the encouraging part is that there is growing recognition of that, and the Kempe Center wants to be an advocate on this. It is and has been engaging through the Kempe Foundation on legislative initiatives at the federal level and at the state level. And I think you continually are looked to as the guide. And I was particularly taken by your comment earlier about the importance of kids swimming between the flags.
00:29:53:6 - 00:30:18:0
Unknown
So I think it's that balance that we're looking for. We're not trying to throw kids out of the ocean or out of the pool; we're simply trying to bring about some reasonable guardrails. You're the world leader in this. Any closing thoughts you'd like to bring to our audience? Well, I'd just like to thank you and the Kempe Center for keeping the conversation going.
00:30:18:1 - 00:30:48:6
Unknown
It's really encouraging to know that we are achieving a degree of cut-through. You know, we're in a lucky position here: we've got 26 million people, so we can provide those services to everyday Australians when they're experiencing online abuse, something I'm not sure you could scale in a country as large as the United States. But we've been engaging quite a bit with
00:30:48:8 - 00:31:34:8
Unknown
the DOJ and Homeland Security on combating child sexual exploitation. We've been working closely with the White House on gender-based violence. In the US Future of Tech Commission report, they recognized safety by design, and organizations like Thorn and All Tech Is Human have taken safety by design to some of the U.S.-based AI companies. But what we actually need to see is safety by design in action, not just as a concept or as an initiative, but really meaningful risk assessments, understanding of harms, and using that collective brilliance to engineer out potential harm and misuse.
00:31:34:9 - 00:32:00:8
Unknown
Well, Julie, thank you so much. Julie Inman Grant is the world leader in this fight, the eSafety Commissioner of Australia. Thank you for being with us today. Thank you for your courage, your vision and your leadership. You have put Australia at the forefront as we grapple with how to keep children safe in the digital world, and you've made progress in the fight against 21st century child abuse.
00:32:00:9 - 00:32:03:5
Unknown
Thank you so much for all that you do.
00:32:04:2 - 00:32:20:8
Unknown
Thank you for listening to Radio Kempe. Stay connected by visiting our website at kempe.org and follow us on social media.