Journalists have strong standards and organizations right now, but those standards aren't doing much good in a digital world.
JournalList.net and the trust.txt framework make that good work pay off.
Journalism is absolutely essential to a free society. For better or worse, our society is now inextricably bound up with the digital lives of its citizens. It's time for the gatekeepers in our digital world to have access to the strength that exists today in journalists coming together with a common purpose.
Here’s a detailed look at the background of JournalList, written in the spirit of full transparency by the founder, Scott Yates.
It’s been four years since 2016.
2016 was the year we watched Duterte get elected in the Philippines, and we heard a little about how a campaign of disinformation helped that happen. But we didn’t pay much attention. At least I didn’t.
Then Brexit. And then we heard that disinformation indeed played a part, but nobody really understood it.
Certainly nobody really understood it by the time Trump got elected.
All three happened in 2016, and by early 2017 it was clear that all three of those elections were disrupted by a global effort aimed at sowing discord, chaos, and resentment.
And even though it was made clear by investigators, and now it’s proven, understandable, and fully explained… I ask again: What’s changed?
What’s really different now? What is in place to stop the next attacks?
I would submit to you that even though we've spent a lot of money, nothing substantial has changed, with one exception: Those who want to undermine democracy are now more prevalent around the world. There are more of them, and they are all more sophisticated. New evidence pops up all the time, and the work of spreading disinformation is becoming almost a commodity.
So what can we do that will create some actual defense?
My proposal is this: There are some efforts out there right now that are solid, and legitimate. There are structures out there that could help. There is a lot to work with. The problem is that there’s no information that’s available in a scalable way to let everyone know what is working.
What do I mean by that?
Let’s take an easy example from my home state of Colorado.
We have two excellent organizations for the media, the Colorado Press Association and the Colorado Broadcasters Association. Neither of them are huge or powerful, but they exist, and they have for a long time. They both also know exactly who is a member, and who is not.
One of the really scandalous bits of misinformation that was spread before the last general election in the U.S. in 2016 came from “The Denver Guardian.”
Now, I know there’s no such publication because I live here. But let’s take a hypothetical example of a 73-year-old white man living in Wisconsin. Let’s say that guy sees a story in his Facebook feed about a politician (whom he doesn’t like anyway) in “The Denver Guardian” and it looks real, indeed.
In 2016, Facebook didn't really do anything to stop the spread of that story, and indeed amplified it because it was getting great "engagement." And while there is some talk from Facebook about how things are different, there's not much evidence that anything really is. Facebook still promotes stories based on engagement, and that "Denver Guardian" bit was a story that was getting great engagement. We can have a debate about exactly what Facebook should and shouldn't do, but because I'm not Mark Zuckerberg and chances are you aren't either, let's set that particular argument aside.
If we do put that aside, and then we take the harder step of trying to look at it from the perspective of Facebook (which has to deal with literally billions of users every day), then we might come away with one question: Even if Facebook wanted to promote legitimate journalism and keep disinformation off its platforms, how would it do that? How would Facebook know that the "Denver Guardian" is fake?
The next question that often comes up once we start down that path is this: Do we want Facebook deciding what is a legitimate publication?
What about Google?
Or what about the U.S. government? Or any government?
The answer to all of the above is, clearly, NO!
My view on this is crystal clear: Nobody - and I mean nobody - gets to decide what is a legitimate publication… with one exception: groups of journalists who on their own decide who gets to be in and who does not get to be in their own group.
Which brings me back to the Colorado Press Association and the Colorado Broadcasters Association.
Is the "Denver Guardian" a member of either of those groups? No. Could it be if it applied? Nope. Remember, that wasn't a real publication; it only existed as a clickbait site created by someone not even from Colorado.
Those two associations have been around a long time, and they aren’t going to let the reputation of their members get dragged down by letting in some kind of scammer or sockpuppet of a foreign government.
That system works, and works well, with only one real problem: Nobody outside of their immediate sphere of influence knows that it works.
That's where this new solution comes in.
The solution is a reporting tool called JournalList.net, a networked list of journals, publications, broadcasters and more.
It works like this:
Let’s start with a publisher, one that’s close to my heart: The Durango Herald. (It was the first place I worked after college.)
The Herald is a small paper with a website in a small town in southwestern Colorado. It has been a member of the Colorado Press Association forever.
Under this new plan, the Herald would get an email from the Colorado Press Association saying that it is going to start using this new system hosted at JournalList.net. In that system, the association is going to do two things:
- Create a trust.txt file in the background of its site. That file will have a list of all the members in good standing. It will be visible in the way that robots.txt and ads.txt files are visible now, but it will mainly work behind the scenes.
- Send to JournalList.net its membership list.
Each member publication will then create its own trust.txt file that it will put on its own site, much in the same way that even very small publishers put an “ads.txt” and a “robots.txt” file on their sites now.
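To make that concrete, here is a minimal sketch of how such files might be read by a machine. The directive names (`member` for an association listing its publishers, `belongto` for a publisher pointing back at its association) are illustrative assumptions about the trust.txt format, not a definitive rendering of the specification; the point is simply that the relationship is expressed in plain key=value text that any crawler can parse.

```python
# Sketch: parse a hypothetical trust.txt file into its directives.
# The "member" and "belongto" directive names are assumptions made
# for illustration, not the authoritative trust.txt specification.

def parse_trust_txt(text):
    """Return a dict mapping each directive name to a list of URLs."""
    directives = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        directives.setdefault(key.strip(), []).append(value.strip())
    return directives

# An association's file lists its members in good standing...
association_file = """\
# trust.txt for a press association (illustrative)
member=https://www.durangoherald.com/
member=https://www.example-gazette.com/
"""

# ...and each member's file points back at the association,
# so a crawler can verify the claim in both directions.
publisher_file = """\
# trust.txt for a member publisher (illustrative)
belongto=https://www.coloradopressassociation.com/
"""

print(parse_trust_txt(association_file)["member"])
print(parse_trust_txt(publisher_file)["belongto"])
```

The two-way pointing is what makes the scheme hard to fake: a scam site could claim membership in its own file, but the association's file would not claim it back.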
Then, if the Colorado Press Association decides to, it will notify JournalList.net, pay a small fee, and then JournalList.net will add that information to a machine-readable file maintained on the site.
Similarly, the Durango Herald, if it decides to, will let the Colorado Press Association know that it is joining, pay a very small fee to JournalList.net, and then it will show up in the machine-readable file on the site.
Now imagine that we do that not just for Colorado, but for every state, and for every country. Every journalism trade group. Networks like the AP. Media development organizations. Ad-buying collectives. There are hundreds, perhaps about a thousand of them around the world.
There are examples of fantastic operations of groups of legitimate journalists everywhere.
And of course, there are some new-ish groups that are more directly involved in issues of legitimate journalism.
For the last few months of 2019 I was working for Reporters Without Borders on the Journalism Trust Initiative. I basically stopped working on my previous projects just to work on this because I feel so strongly that the whole world of journalism needs some real standards.
Journalists and publishers have had things they’ve called “standards” forever, but they aren’t standards in the way that the rest of the industrialized world understands them. To be a standard, it has to be created with a process that’s approved by the International Organization for Standardization. (In the frontwards-backwards international world of acronyms, it’s known as ISO.)
I learned a great deal during my time as an entrepreneur in residence at CableLabs (where a previous version of this idea first percolated) about standards, and I had fun writing in a post for Misinfocon about how there really are standards for everything, everything except journalism. Standards and editorial freedom just didn’t go together.
But after 2016, something needed to change, and it’s clear after studying this for more than a year that standards are the first part of some real change.
That’s why I put my work on hold and I was happily pitching in to get the JTI standards done. We need real ISO standards in journalism.
One part of the reason why we need them so much is this: There are groups of legitimate journalists or publishers out there right now that have standards, but they've never actually written them down.
Let’s take an example from the world of big media.
There's a group known as Digital Content Next, based in NYC. This is a trade group made up of some of the biggest digital publishers in the world. It organized itself as a way to speak as one voice on issues that these digital publishers have in common. Right now it doesn't actually have journalistic standards because every member has its own standards, and is big enough to have lawyers that look at those standards, etc.
Now, Digital Content Next could do the work of coming up with its own set of standards, or it could decide just to adopt the JTI standards. All members would have to agree to abide by the standards, but all those publishers probably will be doing that already.
And because these are big, BIG names in the publishing world, they may want to get some official certification, and Digital Content Next may want to get an official accreditation. That's how this works: First comes self-assessment; then comes external certification to check that the self-assessment is valid; then comes external accreditation to make sure the certifier is following a set of rules; and finally there's a process owner who makes sure that the accrediting bodies are following the rules for the whole process. I think of it like a stack:
- Process Owner
- Accreditors approved by the process owner
- Certifiers sanctioned by an accreditor
- Publisher applying for certification
Does that sound heavy, difficult, and cumbersome? Yes, yes it does.
But remember: That’s a good thing. We don’t want malicious actors being able to pretend that they are following the rules by just sticking a JPEG image of a badge on their own site. We learned that lesson in the early days of the internet when legitimate groups created badges meant to signify trustworthiness, and early digital fraudsters figured out that they could easily put those badges on their own scammy sites, and nobody could do anything to stop them.
Here's where JournalList.net comes in. Even if all that were in place today for Digital Content Next — the certification-accreditation system, all of it — the platforms wouldn't have an easy way to know it. Google and Facebook deal only in ridiculously large, uniform datasets. Now, the JTI will likely have its own reporting tool, but it is not yet clear whether that would report just about the JTI, or about all other associations and standards, too. Whatever it does will be up to the process owners within the JTI framework.
The JournalList.net site isn’t an accreditation framework, and it’s not an official part of the certification process. Its only job is reporting out the kind of data that can actually be useful at a massive scale.
JournalList.net is an online tool that does the equivalent of taking all those little logos on the back of your inkjet printer, and turning them into a text file that can be read by a robot.
Google and Facebook have robots. Advertisers do, too, and this will help them a lot. Academic researchers, too. (More about the academics below.)
Now, would The Durango Herald have to join this whole very complex accreditation framework? Nope. It just needs to remain a member in good standing with the Colorado Press Association. If it is an AP member, it will get listed twice. Some publishers may belong to a half-dozen or more different groups that get them listed on JournalList.net. (Now you see how this really is a network of lists for journalists and publishers, hence the name.)
At some point, the Colorado Press Association will have to decide if it wants to be a part of the JTI, and then it will also decide if it wants to get accreditation. It won’t be an easy decision, as there will be some significant costs involved, but it may be worth it because the accreditation is something that will also be reported into the JournalList.net, and therefore on to the platforms and advertisers.
And then the JournalList.net list — the one that will have all of the hundreds of journalistic organizations and thousands and thousands of publishers — will all be available to the Facebooks, Googles, advertisers, and researchers of the world.
What about the platforms?
At this point, you may be thinking that JournalList.net is doing a lot of work to make Google and Facebook’s jobs much easier, and that’s true. But that’s not the goal, the goal is to make sure the legitimate journalists get recognized for being legitimate.
But if we are making Google’s job easier, shouldn’t Google pay for that?
That was my thinking when I first had the idea for this, but that’s evolved.
This part creates some challenges, but Google and Facebook will NOT be paying for it. We’ve heard directly from Googlers and Facebook team members that this kind of thing would make the jobs of people inside the platforms much easier. And they have so much damn money, it hurts to not charge them, and instead charge the Colorado Press Associations of the world, organizations that barely have two nickels to rub together.
But if JournalList is going to work it needs to be work of the journalists, by the journalists, and for the journalists. If the big guys are putting money in, well, it will always be suspect, no matter how many arms we put into an arms-length agreement.
Same for governments, and even other journalism organizations from universities or big non-profits—no money from them, either.
Also, the same goes for the certifiers and accreditors. (More on that potential conflict of interest below.)
This is going to be a totally independent group owned and operated by the members who participate.
It will be registered in the United States as a 501(c)(6), the structure most typically used for things like a chamber of commerce, so technically it won't be "owned" by anyone, but you get the idea.
Luckily, the costs will not be huge. We'll just be maintaining a list of lists and the system behind the trust.txt files, so the costs to publishers, groups, and accreditation bodies will be very, very low. Also, JournalList.net is a nonprofit, and it will have totally open books that will be quite simple. And because it's a nonprofit, it can never be acquired, so the platforms and others can trust that it's a system that will be around for many years to come.
And while the easiest solution would be to form this as part of the JTI, and get the built in support that would come from being part of the fantastic group of people at RSF, I’ve come to the conclusion that it can’t actually do that.
As I mentioned, the JTI is an absolutely essential program, and I’ve been working hard to get the standards done and when those standards were published December 19th, nobody was celebrating more than me.
But if a reporting tool is going to be universally used and accepted, it needs to be independent of any one accreditation scheme (as they say in Europe) or framework (as they say in the U.S.).
The fact that there is anything to report is a function of the legitimacy of the JTI, but there are other efforts out there and they should be able to have their work reported in the same system even if they decide not to be part of the JTI. The Colorado Press Association may decide to use the JTI, or it may not. The documents of the JTI make clear that participation in the JTI should be considered a positive sign, but not participating is not any kind of negative sign. I personally hope every legitimate publisher and association in Colorado, America, and the world uses the JTI, but that will be a decision that may take months or years to make and implement, and we need a reporting tool that will work right now.
That reporting tool will also be able to work for other organizations around the world, like Projor in Brazil, which creates an "Atlas" of more than 12,000 legitimate publishers in that country. The Trust Project, based in California, is a true pioneer in this space and has publishers participating around the world.
And there are some groups that may not be really thought of as standards-based organizations, but they actually are. For instance there are ad-buying networks that only admit publishers who meet brand-safety requirements. That’s another way of meeting standards.
But here’s the thing: All of those great efforts right now are essentially invisible to the platforms. Should The Durango Herald get a boost in online credibility because it belongs to the Colorado Press Association? I say yes, but right now that membership means nothing as a digital signal of quality. Same for any publication that’s done the work of participating in the Trust Project, or has been interviewed to be in the Atlas for the Projor project in Brazil, or any of the other great programs out there.
And if Digital Content Next, for example, decides to adopt the ISO-approved journalism standards from the JTI, and get a rigorous (and expensive) audit with certification from an authorized auditor, it will have done all that and the work will still be largely invisible to platforms and advertisers because no web scrapers will be able to see that all that work has been done.
With JournalList.net, all of that will be visible.
Can this system be gamed?
OK, if you are at all skeptical, you will undoubtedly be asking one question right now: How can this be gamed by the malicious actors?
Let's say that some country, let's call it Evilland, decides that it is going to set up its own system of certification and accreditation, and go through all the motions of following the rules. Then that Evilland accreditation body comes to JournalList.net and says that it has set up the system, the trust.txt file, all of it. It now wants to be in the system.
I've thought a lot about that, and talked to the best people I could find to go through this step by step. The question is this: Should we create a barrier to entry so that only certification frameworks that meet some standard can come in, or should we let in anyone who can create a framework and set up a proper reporting structure? In short: Do we as JournalList.net do the evaluation, or do we force those using the data to do their own evaluation?
I’ve decided there can be only one answer to that question, and it’s to let anyone participate who can set up the proper reporting structure, and force the consumers of the data - platforms mostly, but also advertisers and researchers - to do their own evaluation.
For one, this keeps JournalList.net super small, nimble, and free of any controversy. We are just the reporting structure. It avoids us having to answer the question: Who The Heck Are You? That's a question that gets hurled at anyone making editorial decisions about the quality of content, and it's a legitimate question. We could set things up so that we have a legitimate answer, but the process of doing that could become quite slow and heavy, and then we'd be constrained in the way that certification and accreditation bodies are, and so we might as well be doing that work.
This is a reporting tool only, and we’re going to leave it to others to do that heavy work.
Another reason is that the users of the data should do the work.
Look, the reporting structure for the JournalList.net will be a bit of work that requires some understanding of technology and an awareness that this trust-based network exists in the first place. That alone will keep out the fly-by-night scam artists who love to create sites and posts one day and then disappear the next.
But if Evilland gets in, well, that will be the point at which the platforms will really need to do the work to figure out what the reputation of each group is, something that can only happen slowly and over time.
This is something Google already does: It gives a rank to every page it scans, and that rank is typically built up slowly over time. Facebook does similar analysis.
It will be a similar thing here, and can take advantage of the ranking that the platforms are already doing.
This is also something researchers can help with. Right now academics are working hard to understand issues of disinformation, and they often lack data to be able to draw any meaningful conclusions. With a massive dataset of independent (and free) data, they will be able to cross-reference their work globally in new and useful ways.
Let’s go back to the example of the Colorado Press Association. Google will be able to see that the pagerank of each of the members varies, but in general is pretty good. All of the members of the association will be helped by the other members of that group all being in the same digital bucket. Easy, right?
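The "same digital bucket" idea can be sketched in a few lines. Everything here is a hypothetical model, not Google's actual algorithm: the per-site scores are invented, and the aggregation rule is a plain average chosen for illustration. The point is only that once an association's membership is machine-readable, a group-level signal is trivial to compute.

```python
# Sketch: deriving a group-level quality signal from one association's
# "bucket" of member sites. Scores and the averaging rule are
# illustrative assumptions, not any platform's real ranking method.

def bucket_signal(site_scores):
    """Average the individual quality scores of a group's members."""
    return sum(site_scores.values()) / len(site_scores)

# Hypothetical per-site scores a platform might already have.
colorado_press = {
    "durangoherald.com": 0.82,
    "example-gazette.com": 0.74,
    "example-tribune.com": 0.79,
}

signal = bucket_signal(colorado_press)
print(round(signal, 2))  # every member shares in the group's reputation
```

Note the two-way pull this creates: a strong member raises the bucket's signal for everyone, and a weak member drags it down, which is exactly the dynamic discussed in the Ofcom example below.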
What is the signal?
OK, let's take a slightly harder example: The Office of Communications, a part of the British government, sort of like the FCC in the U.S. Ofcom, as it's called, is the body that licenses broadcasters in the U.K. It has given a license to the BBC, and also to a news station funded by the government of China. That station broadcast a segment that violated Ofcom standards, and Ofcom officials have grappled with the question of whether to yank that broadcaster's license.
Now, let’s say that Ofcom decides to participate in the reporting structure of JournalList.net, especially as it relates to the websites for the broadcasters who are licensed by Ofcom. It would then list its licensed members, and that would be part of what gets reported to the platforms and others.
Now, Google almost certainly does not rank Chinese state-sponsored media as a reliable news source that it wants to distribute or promote, especially in the U.K. But in general Google tries to use only algorithms, not people, to make decisions. If it builds an algorithm (and this would be trivial for it to do) that gives an indication of quality based on the quality of the other members of a particular bucket of publishers listed on JournalList.net, well, then that becomes a big problem for some publishers.
Here’s how: The Ofcom digital “signal” would be helped by the fact that it has the BBC in it, but it would be brought down by any member like that Chinese-funded media outlet. Conversely, that outlet would get a boost from the fact that it’s in the same bucket as the BBC.
So here’s my hunch of how that plays out over time: Right now the BBC isn’t really hurt by the fact that the Chinese TV station is licensed by Ofcom. Each channel is judged on its own by Ofcom, and ultimately by viewers.
But when the publishers are judged not on their own, but as a group, the BBC may push to get the Chinese publisher kicked out. It doesn't want its brand being down-ranked or otherwise harmed by being mixed into the same bucket as a Chinese propaganda outlet.
That would be, in my mind, proof that the system works. And that's why I don't want JournalList.net to be making judgment calls beforehand. I want the marketplace of platforms, publishers, associations, certifiers and accreditors to work for their own ends, and work over time.
Being part of the JournalList.net list will be an initially good sign, but the reputation that builds up over time for publishers and publisher associations is what will really mean something.
What is “news” anyway?
Here’s another way the trust.txt framework could help:
If you went to a browser and typed in “Christchurch” before the 15th of March, 2019, you would probably get a lot of tourism related results. If you typed that in on that horrible day, or for several months after, you’d get news results and lots of suggested YouTube videos about the shooting. Some of those would even have the livestream of the shooting, even though YouTube employees tried to manually delete those as fast as they could.
Deleting videos manually doesn’t sound very, well, algorithmic, but that’s what they had to do. Why? The YouTube team wanted people to be able to get the news, and not get tourism videos when they clearly were looking for information about the incident. And yet they lacked enough reliable tools to be able to only display videos from trustworthy sources.
If only there was such a tool.
With JournalList.net there will be.
Here’s how that aspect will work:
Publishers will report their channels, including their YouTube channels, as part of their trust.txt files. The publishers in turn will be part of one or more associations, and the platforms will evaluate which of those associations are in turn trustworthy.
The current system inside Google has an algorithm that pops up news results based on the search terms entered. That's why you see news results when you search for "Brexit" but you don't when you search for "British Bed and Breakfast."
The only thing the engineers at Google News would have to do is increase the reliance on the JournalList.net signal in showing results for news search terms. That would avoid them having stories like “Las Vegas Shooter Is Rachel Maddow Fan” showing up at the top of the Google News feed. For hours.
And over at YouTube, rather than manually delete thousands of videos, they could just insert a rule that videos for news searches (using the same rules that go into the search box to define “news” searches) could use JournalList.net signals and only show videos from channels that are associated with known groups of legitimate news publishers. If the YouTube developers had that data available a year ago, they would have had much less heartburn in 2019.
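The rule described above is simple enough to sketch. This is a toy model under stated assumptions: the channel IDs, video records, and the trusted set are all invented, and in a real deployment the trusted set would be derived from the trust.txt files aggregated by JournalList.net rather than hard-coded.

```python
# Sketch: for a query already classified as a "news" search, keep only
# videos from channels tied to a listed publisher group. All IDs and
# the trusted set are hypothetical, made up for this illustration.

trusted_channels = {"UC-bbc-news", "UC-durango-herald"}  # from trust.txt data

videos = [
    {"id": "v1", "channel": "UC-bbc-news", "title": "Christchurch coverage"},
    {"id": "v2", "channel": "UC-anon-upload", "title": "shooting livestream"},
    {"id": "v3", "channel": "UC-durango-herald", "title": "local report"},
]

def filter_news_results(videos, trusted):
    """Drop any video whose channel is not in the trusted set."""
    return [v for v in videos if v["channel"] in trusted]

print([v["id"] for v in filter_news_results(videos, trusted_channels)])
# → ['v1', 'v3']
```

The anonymous re-upload of the livestream never surfaces for the news query, with no manual deletion required; for non-news queries the filter simply wouldn't be applied.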
Clearly, YouTube didn’t get to be huge by just showing people video from the legacy video producers. If famous native YouTubers want to talk about the news, they can and they should. If they want to be a part of the JournalList.net, they just need to join a group that has some standards.
Do such groups exist? Well, yes. Here’s one example: the Branded Content Marketing Association, the BCMA.
That group says that it is “for branded content practitioners, run by practitioners, promoting best practice, sharing knowledge and growing the branded content industry.”
In this context “best practice” is not a super rigorous standard, but it’s a start. It also has a Code of Conduct and a Constitution, which is another way of saying that it does have standards.
And because it is “run by practitioners” there’s a good chance that the people running it will know if someone is really one of them, or if it’s a sockpuppet run by a foreign government, or a scammer just trying to use the reputation of the other members for its own malicious purposes.
If some other group of YouTubers wants to make its own association just to be sure it can be a part of the JournalList.net, that’s fantastic.
That’s a big part of why we won’t be judging groups a priori or a posteriori. We won’t judge them at all. If the members of the BCMA say that they think it’s going to help them to be part of the JournalList.net reporting structure, good on them. If that reputation develops into a strong one over time, all the better.
(And a small plug here: the BCMA and a lot of other organizations that bring together content creators should think about adopting the JTI standards. Those standards are set up so that digital natives, one-person operations, and new entrants can participate at whatever level they are at. Content creators on YouTube may not self-identify as Journalists with a Capital J, but if they are talking about current events, they are content creators in ways that get to the heart and history of what journalism is all about.)
The concept of JournalList.net is not to keep out anyone who is doing legitimate journalism, or even just goofing around on YouTube. The concept is to encourage all of the legitimate outlets to join together in whatever way they self-define, and then to report the fact that they’ve joined together in a way that the platforms can recognize.
The effect of that will be keeping out those who are creating content not for the sake of the content, but for the sake of grift, or of undermining democracy and spreading chaos.
What about those trying to undermine democracy?
But what about _______? (Insert the name of the media outlet that you personally find most corrosive.)
The question about participating in JournalList.net reporting comes down to this: Is there any kind of association that would take that outlet as a member? If it’s really on the fringe, well, two things:
- They probably are so busy dealing with their own internal demons that they’ll never be able to actually even finish the paperwork to join anything, and,
- No legitimate group is going to want to degrade themselves and their members in good standing by allowing them in.
Let’s go back to the example of Evilland, which could set up its own body, but it would only have Evillandian media outlets as members. Participating would probably actually damage their credibility even more, so they won’t want to try.
That’s the system working.
Or let’s try one other example and name names: Russia Today and Sputnik, two Kremlin-controlled media outlets. (I hesitate to do this because I’ve met people who work for RT, and they were very nice to me. Also, I hesitate to do this because I don’t want to sit on a park bench one day and get poisoned with a nerve gas, which is a thing.)
The Kremlin has shown a high degree of sophistication in its efforts to spread disinformation, so it’s certainly possible that it could set up its own certification body and accreditation structure, and then report that structure into JournalList.net.
We also know that the Kremlin is very active in subverting democracy around the world, as in those stories I linked to above, and from what we know about its activities in the U.S. in 2016, like the infamous efforts in Florida that Russians were later indicted for.
If the Russians wanted that effort to succeed, you might think they’d want to get a JournalList.net listing for the pages they create, but to do that, they’d have to register them using the same accreditation framework and the same bucket as RT and Sputnik.
So, their choice would be to go without a JournalList.net listing, or to get one and have the whole world know instantly that the effort is Kremlin controlled, something they worked hard to avoid in the past. In short, there’s no practical way for the really maliciously created pages to get a JournalList.net listing that helps the Kremlin get traction for those pages.
And we know that disinformation efforts need to be able to hide their associations, as we learned in the recent example of The Epoch Times hiding their connection to spin-off publications. They would go out of their way to keep their association private by avoiding listing them in the registry together.
That’s another example of the system working.
Here’s one more:
Great to see @NiemanLab using the data @acookiecrumbles collected for @TowCenter project – and interesting analysis on the local vs national divide in partisan sites. Of course swing states particularly well served https://t.co/HaHbUCcAwd
— emily bell (@emilybell) July 13, 2020
I did a bit of checking, and most of the sites mentioned in that study are—unsurprisingly—absent from any state press associations. They don’t belong to national groups, either. And if they want to participate and post a trust.txt file showing their common ownership, that’s fantastic. It will make the research that much easier, and make it a snap for the platforms to realize that the sites presenting themselves as local news providers don’t actually have anything to do with news.
That’s the system working once again.
How will the technology work?
I’d like to say something about the infrastructure and approach that is proposed here, because it may seem a bit odd, especially if you are not familiar with the .txt approach.
A brief bit of history: In the early days of web pages and the search-engine robots that looked at them, there were sometimes problems. A robot would “crawl” every page of a site, but because of some peculiarity of the site, it would get stuck in some kind of loop. Because of that, a software engineer came up with the idea of a file that would serve as instructions to the robot web crawlers. That file was called robots.txt, and a huge portion of sites around the world have one. You can see one for yourself, if you want to, by going to a site and typing “/robots.txt” right after the .com or .org or whatever. There you will see that site’s instructions to robot crawlers.
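To make that concrete, a robots.txt file is just plain text that a well-behaved crawler reads and obeys, and Python’s standard library even ships a parser for it. Here’s a minimal sketch using made-up file contents (example.com and the rules shown are illustrative, not any real site’s file):

```python
from urllib.robotparser import RobotFileParser

# Made-up robots.txt contents, of the kind you'd see at example.com/robots.txt.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler checks each URL against the rules before fetching it.
print(parser.can_fetch("*", "https://example.com/news/story.html"))    # True
print(parser.can_fetch("*", "https://example.com/private/draft.html")) # False
```

The crawler asks “may I fetch this URL?” before every request; the file’s rules answer yes or no.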
There is a full protocol behind the robots.txt system. For a while it was just a thing that web page programmers knew to build, but then Google essentially took it over. (One of the ways you can tell this is a Google product now is that three of the four people listed on the protocol work at Google, and the other is the Dutch engineer who originally invented it, so his name is likely still there more as an homage than an indication that he has any say in what happens now. Also, nearly all the information about robots.txt lives on Google pages.)
There is one other .txt file I mentioned above, ads.txt. This one operates more like a traditional standard. The framework owner is one body, the Interactive Advertising Bureau, which has created a standards document. It’s not an ISO standard, but it’s still a solid piece of standards-like work, and the file is now used by just about any site that carries paid advertising, as a way of reducing advertising fraud.
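For a flavor of what such a file looks like: an ads.txt entry is one comma-separated line per authorized seller of the site’s ad inventory, giving the ad system’s domain, the publisher’s account ID there, the relationship, and an optional certification authority ID. The domains and IDs below are placeholders, not real accounts:

```text
# example.com/ads.txt — who is authorized to sell this site's ad inventory
adexchange.example, pub-0000000000000000, DIRECT, abc123
resellernetwork.example, 12345, RESELLER
```

A buyer can fetch that file and refuse to buy “example.com inventory” from anyone not listed in it.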
Both of those are approaches that I considered for this effort.
On the one hand I could have chosen to just work with Google or Facebook to create this framework.
On the other hand I could have tried to have this framework exist as a part of some other group in the way that the Interactive Advertising Bureau set up the ads.txt framework.
Well, here’s why I decided that we needed a third approach for this third framework, the trust.txt file.
First, if I tried to create this within just Google or Facebook, then whichever one wasn’t part of the creation wouldn’t really trust the product of the other one. If I’m trying to help address the disinformation crisis, leaving out one of the two biggest gateways for the bad stuff would be a half-loaf solution, and one that news publishers wouldn’t really trust anyway. (If you aren’t familiar with the news industry, there’s a LOT of bad blood between publishers and platforms.)
Also, news is just different from web protocols, and much more politically sensitive. There’s a deep vein at the platforms of being purists about an “open web.”
(We can debate if this is ideological or just covering their collective asses, hoping that they can stay covered by Section 230 of the Communications Decency Act and pretend that they aren’t “publishers” at all and all they do is let others post. That’s a debate for another day. Either way, the platforms are not the right place to host JournalList.net.)
Now, why not be a part of some other group using the model of the Interactive Advertising Bureau?
Again here, the short answer is that advertising and news are just different.
When I was a cub reporter, and before that in what was called in those days “J-school,” I learned about the separation between church and state. That was the shorthand for the separation between advertising and news: I was never supposed to let advertisers have any say in what I wrote.
Some of that ethos remains to this day, and properly so. Publishers with any integrity will naturally be suspicious of any global effort to work with the platforms that is too tied up with any one organization. This whole initiative is about trust, so publishers need to be able to trust it all the way up and down.
There’s also a conflict-of-interest problem. This was part of what created the Great Recession, when auditing firms were too tied up in the outcomes of the things they were auditing.
That is another reason JournalList.net isn’t doing any auditing. If it were bound up with a body that does auditing or certification, it could be perceived as weighting its efforts toward that body. That’s why we can’t be directly tied to any of the other programs, as great as they are.
And we can’t even be a part of groups that seem like they are only tangentially involved. As an example, let’s say that the JournalList.net was created as a project of the Poynter Institute. That institute has a sterling reputation and tries lots of innovative projects. One of the things it does is run a fact-checking site, PolitiFact.
That site operates as one of a robust community of fact-checking sites, including a different one operated by the Washington Post. They might not talk about it a lot publicly, but I can tell you that those two sites compete with each other in ways that are subtle but very real. And both of those compete with a bunch of others, nearly all of whom have some kind of official backing from someone.
If JournalList.net were a part of Poynter, well, the Washington Post might pass on participating, and with it we’d lose the AP and Digital Content Next and all of the members of each of those bodies, and suddenly we’re just a theoretical project that’s not doing much good for anyone.
All of the legitimate fact-checking operations, however, belong to a group called the International Fact Checking Network, another example of a body that could use JournalList.net and a trust.txt file to report its membership to the platforms, advertisers, and researchers.
There are lots of bodies with lots of degrees of trust. It’s just that there’s no way for anyone to see all of those in one place. That’s what JournalList.net is all about.
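In machine terms, the registry boils down to a symmetric cross-check: an association’s trust.txt lists its members, each member’s trust.txt lists the associations it belongs to, and anyone can verify that the two sides agree. A minimal sketch, where the member= and belongto= variable names follow the published trust.txt specification and all the domains and file contents are invented for illustration:

```python
# Invented, pre-parsed trust.txt contents for an association and a publisher.
ASSOCIATION_TRUST_TXT = {
    "member": ["https://example-news.com/"],
}
PUBLISHER_TRUST_TXT = {
    "belongto": ["https://examplepressassociation.org/"],
}

def membership_verified(assoc_url, assoc_entries, pub_url, pub_entries):
    """Both sides must vouch: the association lists the publisher as a
    member, AND the publisher lists the association it belongs to."""
    return (pub_url in assoc_entries.get("member", [])
            and assoc_url in pub_entries.get("belongto", []))

print(membership_verified(
    "https://examplepressassociation.org/", ASSOCIATION_TRUST_TXT,
    "https://example-news.com/", PUBLISHER_TRUST_TXT))  # True
```

A site that merely claims to belong to an association fails the check, because the association never claims it back.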
Moving forward, together
Thank you for reading this. I hope it had some helpful bits.
The future of journalism is very much up in the air. It stinks that so many of the troubles for journalism were created outside of journalism, and yet need to be solved inside of journalism.
But there just isn’t any other choice.
If you are an association, please set a time with me so that we can talk.
If you are a publisher, fill out this form, or just contact any association you may belong to and let them know about JournalList.
If there’s something I did not cover in the 4,000 or so words on this page, please be in touch.
Thanks for being part of the future of fixing the news ecosystem.
-Scott Yates, Founder, JournalList, Inc.
Actual Social Networks
Another aspect of what is in the trust.txt file needs more explanation:
I mentioned PageRank above. The JournalList.net data is exactly the sort of thing that site-quality engineers eat up. I have no doubt that the JournalList.net list will become part of that.
But there’s no blacker black box than the algorithms that decide what to show you when you search, or when you scroll through your social feed. So even though I am absolutely convinced that JournalList.net participation will boost the good and keep out the bad, it may take a while to see those results.
So, how else will it help? Is there something else more tangible and immediate?
Yes, yes there is.
Here are three recent stories that show problems JournalList.net will solve. (They’re all from The New York Times just for consistency, and they are all really on point.)
Americans Trust Local News. That Belief Is Being Exploited.
Russia Tests New Disinformation Tactics in Africa to Expand Influence
How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader
How will we fix those things? The short version is that we’ll be making it easier to know which online publishers are producing news legitimately.
But who is an online publisher and what do they publish?
Google has a pretty good idea that the Washington Post runs washingtonpost.com. What it may or may not know with any real certainty is exactly which Facebook pages are actually run by the Post. Or Twitter feeds. Or YouTube accounts.
One of the things we ask for in generating a trust.txt file is the exact address of every social media account controlled by a publisher. That way the platforms will know which, for example, Twitter accounts actually come from CNN, and any others (like the one with a CNN logo that made some false claims recently) are imposters.
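To make that concrete, here is a sketch of what such a file might look like. The variable names (belongto, social, contact) follow the published trust.txt specification as I understand it, and the domain and accounts shown are illustrative, not any real publisher’s actual file:

```text
# example-news.com/trust.txt
belongto=https://www.examplepressassociation.org/
social=https://twitter.com/examplenews
social=https://www.facebook.com/examplenews
social=https://www.youtube.com/examplenews
contact=mailto:editor@example-news.com
```

Each social= line is the publisher saying, on its own domain, “this account really is ours.”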
If there were some sort of certification of that process, it would be an easy thing for a certifier to check, and then just as easy to report. JournalList.net will collect all of that and make it available in a machine-readable format.
And just like that, the purveyors of misinformation would be denied an important tool: No longer would they be able to create a Facebook or YouTube page and say that it is associated with a known publisher — as happens all the time — because it wouldn’t be part of the trust.txt file, and so would be super easy to flag as being bogus.
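A platform-side check could then be as simple as parsing the publisher’s trust.txt and testing whether a claimed account is listed. A minimal sketch, assuming the social= field from the trust.txt spec; the file contents and account names here are invented:

```python
def parse_trust_txt(text):
    """Parse trust.txt contents into {variable: [values]}, skipping comments."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        entries.setdefault(key.strip().lower(), []).append(value.strip())
    return entries

# Invented trust.txt contents for a hypothetical publisher.
TRUST_TXT = """\
# example-news.com/trust.txt
social=https://twitter.com/examplenews
social=https://www.youtube.com/examplenews
"""

entries = parse_trust_txt(TRUST_TXT)

def is_claimed_account(url):
    """A social account checks out only if the publisher itself lists it."""
    return url in entries.get("social", [])

print(is_claimed_account("https://twitter.com/examplenews"))  # True
print(is_claimed_account("https://twitter.com/examp1enews"))  # False: imposter
```

The imposter account with the swapped character fails the lookup instantly, no human review needed.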
Similarly, if a bad actor wanted to create a site that looked like local news, but the platforms noticed that the site looks like news yet has no known affiliation with any of the other local news providers in that area, well, that’s a big, bright red flag the platforms can use to downrank the site.
That’s the first part of why I’m sure this would help the platforms immediately.