Casetext has released a new AI tool designed to auto-create the first draft of a legal brief or motion — in minutes.
Jake Heller, Casetext CEO, says he has used the tool to draft a 20-page motion in under 30 minutes.
That feat ordinarily takes most lawyers five to ten hours.
“It both reduces the amount of time it takes to write a brief and it makes the amount of time predictable because you don’t have to go down research black holes,” Heller says.
Using a transformer-based neural language model, the software also generates a list of cases relevant to the legal document being drafted, according to Heller.
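This isn’t Casetext’s actual system, but for the technically curious, here is a minimal, hypothetical sketch of the general approach: using a transformer-based encoder to rank candidate cases by how semantically close they are to a draft. It assumes the open-source sentence-transformers library and made-up case snippets, and is meant only to illustrate the idea:

    # Hypothetical sketch -- not Casetext's system. Ranks candidate cases by
    # semantic similarity to a draft, using a transformer-based encoder.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

    draft = "Defendant moves to dismiss for lack of personal jurisdiction ..."
    candidate_cases = [
        "International Shoe Co. v. Washington: minimum-contacts standard ...",
        "Daimler AG v. Bauman: general jurisdiction limited to 'at home' forums ...",
        "Miranda v. Arizona: warnings required before custodial interrogation ...",
    ]

    # Embed the draft and the candidates, then score each candidate by cosine similarity.
    draft_vec = model.encode(draft, convert_to_tensor=True)
    case_vecs = model.encode(candidate_cases, convert_to_tensor=True)
    scores = util.cos_sim(draft_vec, case_vecs)[0]

    # Print the candidates from most to least relevant.
    for score, case in sorted(zip(scores.tolist(), candidate_cases), reverse=True):
        print(f"{score:.2f}  {case}")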
In other AI-generated writing news:
*Prototype AI Software Predicts If a News Story Will Be Popular — Before It’s Written: Finnish researchers have developed a prototype AI tool that predicts the popularity of a news story – even before it’s created.
Resources to develop the prototype came from Alma Media and the Media Industry Research Foundation of Finland.
The researchers’ central question: “Is it possible to predict the contribution of a single news story to the willingness of a user to become a subscriber, or stay as a subscriber – before the story is published?”
The results were impressive, according to Atte Jääskeläinen, a researcher on the project and a visiting fellow at the London School of Economics and Political Science.
Alma Media’s project lead summed up the software in two words, Jääskeläinen notes: “Expectations exceeded.”
*New AI Software Sniffs Out Interesting Political Stories: AI researchers led by Nick Diakopoulos have developed prototype software that automatically sniffs out interesting political stories from election databases.
Diakopoulos is an assistant professor of communication studies and computer science at Northwestern University.
Designed to analyze massive amounts of data from national voting records, the tool can identify interesting developments in voting patterns – such as major leaps in voter turnout in specific counties, cities or states — as well as other head-turning shifts in voting behavior.
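This isn’t the researchers’ actual tool, but as a rough illustration, here is a hypothetical sketch of the kind of check such software might run: flagging counties whose turnout jumped unusually sharply between two elections. The CSV file and column names are assumptions:

    # Hypothetical sketch -- not the researchers' tool. Flags counties whose
    # turnout rose far more than is typical between two elections.
    import pandas as pd

    # Assumed file and columns: county, turnout_2016, turnout_2020
    # (turnout expressed as a fraction of registered voters).
    df = pd.read_csv("county_turnout.csv")
    df["change"] = df["turnout_2020"] - df["turnout_2016"]

    # Flag changes more than two standard deviations above the mean change.
    threshold = df["change"].mean() + 2 * df["change"].std()
    outliers = df[df["change"] > threshold].sort_values("change", ascending=False)

    print(outliers[["county", "turnout_2016", "turnout_2020", "change"]])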
*Get Free Training on AI Tools for Journalists: Quartz AI Studio’s John Keefe, an investigations editor, is generously offering free access to videos that train journalists to use AI tools.
“Late last year I taught the online class ‘Hands-on Machine Learning Solutions for Journalists’ through the Knight Center for Journalism in the Americas,” Keefe observes.
“One of the lovely parts about working with the Knight Center is that once the class is over, I’m free to post the videos online.”
The 15 video lessons run from 4.5 to 15 minutes each.
The first starts with an introduction on how journalists can tap the power of AI.
An added bonus: Many of Keefe’s lessons use the Fast.AI machine-learning library for Python, which was built to make ML easier for people not trained in math or computers.
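To give a flavor of what that looks like in practice, here is a minimal sketch, not taken from Keefe’s lessons, of a short fastai training script. It assumes fastai v2 and uses the library’s small bundled movie-review sample:

    # Minimal fastai sketch -- not from Keefe's course. Fine-tunes a pretrained
    # AWD-LSTM text classifier on fastai's small bundled IMDB sample.
    from fastai.text.all import *

    path = untar_data(URLs.IMDB_SAMPLE)            # downloads a tiny demo dataset
    dls = TextDataLoaders.from_csv(path, "texts.csv",
                                   text_col="text", label_col="label")

    learn = text_classifier_learner(dls, AWD_LSTM, metrics=accuracy)
    learn.fine_tune(1)                             # one quick pass of fine-tuning

    # Classify a new snippet of text with the fine-tuned model.
    print(learn.predict("A surprisingly thoughtful piece of reporting."))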
*Not So Fast: AI-Generated Writing Has Its Limitations, Copywriter Says: As 2020 seemingly ushers in ever more reports of publishers and other businesses adopting AI-generated writing, copywriters need not fret, according to content specialist Gary Andrews.
“Machines are useful for quick tasks or tasks that require a large volume of copy testing,” Andrews observes. “And, theoretically, AI should free up copywriters to work on the more complex, in-depth, challenging pieces of work.”
But “don’t throw your laptop in the bin and fill out that Maccas application form just yet,” Andrews adds.
*Fake Web Posts: A Scourge of AI?: Vox writer Joss Fong worries that AI-generated writing will spawn an endless stream of questionable posts and comments across the Web.
Essentially: We’ve entered the new era of fake writing, which should trigger skepticism of every post we find online, she says.
“Bots roam the Internet in huge numbers, primarily deceiving other computers,” Fong observes. “Now, with a decent handle on our language, they have new ways of deceiving humans directly.
“Certainly, it’s been possible to simply hire people to write posts, fake reviews, and misinformation.”
But “what this tool adds is scale, language fluency, and the ability to mirror the jargon and writing style of any profession or — with enough samples — any individual.”
*Norwegian Publisher Launches Study on Responsible Use of AI: AI-generated writing pioneer Schibsted is partnering with a leading Swedish university to find ways to responsibly implement AI in publishing.
“It is due time that the media industry starts playing a more active role both in terms of developing new tools and deciding how to use them in a good, sustainable manner,” says Ingvild Næss, Schibsted’s chief privacy and data trends officer.
“We cannot sit and wait for the politicians and regulators to dictate what we need to do,” Næss says. “We need to step up and lead the way.”
Adds Mattia Wiggberg, a researcher at KTH Royal Institute of Technology in Stockholm: “From an academic perspective, Schibsted is quite a unique organization to study.
“We are eager to get to know their data-driven media operations even better through our collaboration and wish to generate insights that are of value both to the industry and academic world.”
*New Firm Promises More AI Tools for Journalists: Startup Applied XLabs is promising to develop AI tools that will auto-analyze databases and serve up story ideas, insights and other content for news organizations.
Partnering with the Boston Globe, the new company will be led by AI news industry heavyweight Francesco Marconi.
Marconi helped spearhead AI implementations at both The Wall Street Journal and the Associated Press.
Besides targeting news outlets, Applied XLabs will be going after knowledge workers in other industries.
“If you are able to build products and seeds of information that editors can use and sign off on, then you can quickly expand into other industries,” Marconi says.
*Finnish AI Company Calls for Ethics in AI Journalism: Finnish company Utopia Analytics has released the “Ethical AI Manifesto.”
Its goal: To get journalists talking about how the industry should adopt AI ethically.
“AI is a tool, and humans are the master,” says Mari-Sanna Paukkeri, Utopia’s CEO. “People should always define what is right and what is wrong.”
In essence: Paukkeri believes journalists should walk away from unethical AI implementations.
*Key Player in AI-Generated Writing Slates June Workshop: Charlie Beckett, director of the Media Policy Project at the London School of Economics and Political Science, will be talking about AI and news at this summer’s Newswired Conference.
Specifically, Beckett will offer publishers insights on how they can upgrade their newsrooms with AI.
Beckett is author of the study, “New powers, New Responsibilities: A global survey of journalism and artificial intelligence,” released last fall.
*Special Feature: Company Reports That Write Themselves
Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media posts, publications or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years’ experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.