Workflows from the future
“… I think, therefore I am.” - Harlan Ellison, I Have No Mouth, and I Must Scream
TL;DR
Reviewing Gamma - the AI-powered deck builder - and sitting down with co-founder Jon Noronha to get his take on how tools like Gamma will shape the future of work.
3 tools product teams can start playing with today to amp up their productivity, automate the small stuff, and streamline their project management processes.
The most emblematic headlines in AI from the past week.
The big picture - should we take a pause on advanced AI? Elon Musk thinks so...
Oh and you can always just jump to the fun stuff if you’re bored (AI distractions) 🫶
Weekly product highlight - Gamma AI - “Write like a doc, Present like a deck”
Like many product people, I’m a habitual Product Hunt scroller. I love to know what’s next, and having launched on the platform and made it to the top of the podium, I know how exhilarating it is for the teams behind the launches. I was in the midst of one of these Product Hunt binges a couple weeks ago when I stumbled upon Gamma AI.
Gamma AI is the most recent development from slide deck builder Gamma, a company focused on empowering users to create decks with the same ease that they write docs. The new feature is powered by GPT-4, and Gamma claims that with it, users can “do hours of work (compiling content, designing, and formatting a presentation from scratch) in just a few minutes.” With a promise that big, I had to see it for myself.
Over at PlayerZero, my team and I typically have bi-weekly sprint reviews & quarterly roadmapping presentations - both of which could be made far more efficient with a solution like Gamma. So, I decided to put it to the test. The (*hypothetical*) feature we’re proposing here would allow users to upload their consciousnesses into the cloud... here’s what Gamma did with that:
You can view the slide deck I made with Gamma here
All in all, pretty damn impressive. Granted, the generative text has some of the same problems as the underlying technology it’s built on (invented context & hallucination, understandable given that the prompt I gave it was fantastical), but the fact that the initial presentation it spat out is actually relatively usable out of the box is one heck of an accomplishment. And with the insanely useful editing tools built in (automatically make a section more concise, expand on a topic, make the copy more exciting, etc.), I’d estimate that from start to finish, building a real product feature presentation with Gamma could take as little as 15 or 20 minutes.
I was so impressed by Gamma that I decided to sit down with their co-founder, Jon Noronha, to discuss his product journey and get his take on how Gamma and tools like it will impact the future of product. Here’s the transcript:
What inspired you to create your AI-builder and when did you come up with the idea?
The core problem that comes up over and over in our user research is that people want to focus on content, not formatting. We heard so many quotes like "it takes me so long to make things look good, I have no time left to actually figure out what I want to say". AI felt like a natural fit to take over that annoying busywork and let users do the parts they enjoy.
How did you incorporate AI into Gamma? Was Gamma always an AI product or was the AI-builder added later?
We've been working on Gamma for 2 years with the vision of "write like a doc, present like a deck". Our first versions were very manual, but we've been playing with GPT-3 since it first came out, waiting for it to be ready. Back in November, we felt the models were finally good enough to start using, and we've been furiously developing ever since.
How has Gamma's algorithm improved to better cater to user needs?
The first version of our AI was just a one-shot generator, but more recently we've been introducing an iterative chat component inspired by ChatGPT. The power of that approach is the AI doesn't have to get it right the first time - you can take the first draft it gives you, and then go back and forth in a dialogue to get the layout and content you want.
What impact do you foresee Gamma having on presentations and collaboration?
I've wasted thousands of hours of my life in Powerpoint and Google Slides. I hope that in a few years, all that time can be channeled towards more productive work!
How does Gamma address AI ethics and user feedback? What’s your system for maintaining continuous improvement?
Our approach is to treat AI output as a first draft, not a finished product. That means people get to curate and refine the content and layout before anyone sees the end result. We've also built in systems to collect feedback from users and use that to guide our development. We're constantly trying to improve the experience and accuracy of the AI.
How do you see AI tools like Gamma impacting the future of work?
AI tools like Gamma are going to have a profound impact on the future of work. They will allow us to move away from tedious, manual tasks and towards more creative, complex work. AI can free us up to focus on things that require higher-level thinking instead of spending so much time formatting and re-formatting. I think this will allow companies to be more agile and efficient, and give their employees the opportunity to do work that's more meaningful and enjoyable.
If you enjoyed hearing from Jon, feel free to give him a follow on LinkedIn, and give Gamma a try for free today!
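One quick aside before we move on - Jon’s point about iterative chat is worth making concrete. Here’s a minimal sketch of the general pattern he describes (a one-shot first draft, then back-and-forth refinement), written against OpenAI’s Python SDK. To be clear, this is my own illustration of the technique, not Gamma’s actual implementation - the model choice and prompts are made up for the example:

```python
# Illustrative sketch of one-shot vs. iterative chat generation (not Gamma's code).
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# One-shot generation: ask for a first draft of a deck outline.
messages = [
    {"role": "user", "content": "Draft a 5-slide outline for a product launch deck."}
]
response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
draft = response.choices[0].message.content
print(draft)

# Iterative refinement: instead of re-prompting from scratch, append the
# model's draft and a follow-up instruction to the same message history,
# so the model revises what it already produced.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user", "content": "Make slide 3 more concise and punch up the closing slide."})
response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

The win Jon describes falls right out of that message history: the AI never has to nail it on the first try, because every revision request arrives with the full context of the draft it’s revising.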
3 AI-powered tools to turbocharge your efficiency, today
🎙️ ElevenLabs
The product - ElevenLabs claims to have created “the most realistic and versatile AI speech software, ever”, and I believe them. The unique wrinkle here is the ability to clone voices - including your own!
The use case - I could see this being super useful for voicing over product demos, sending quick voice memos to stakeholders, or even adding an extra layer of accessibility to tooltips and similar applications.
📹 Synthesia
The product - Synthesia enables you to create professional videos from nothing but a text prompt. All you have to do is type in your script, and Synthesia will animate an AI avatar of your choice to give the appearance that it’s speaking your words.
The use case - quickly generate product how-tos and add a *human* face to tooltips with Synthesia’s text-to-video interface. Learn more about the use case here.
📆 Taskade AI
The product - Taskade was already a great project management tool. But the addition of its new AI helper has truly changed the game.
The use case - with task management, notes & docs, mind maps, and even video & chat all in one platform, now made even more accessible with AI, it might just be the platform to rule them all for your project management needs.
Introducing PlayerZero co-pilot
PlayerZero co-pilot is a complete reinvention of product monitoring, one that’s only possible today thanks to recent advancements in AI technology. Now, you and your team can monitor customer experiences and find the root causes of negative outcomes in real time without any technical knowledge at all. It’s the easiest way for product folks to answer the toughest questions in their products and quickly hand off actionable answers to their engineers without ever having to learn or interpret a single line of code.
Co-pilot speaks your language - simply ask it what’s breaking, where, and why, and in return you’ll see every customer experience from every angle, from the events and identities to the front-end and back-end issues causing those experiences. And with powerful connectors like Datadog, Sentry, Mixpanel, and Amplitude, PlayerZero is the only platform that connects all the dots to show a complete picture of every single customer’s experience in your product. All you have to do is connect it with an API key, and co-pilot does the rest.
Chronicles of the circuit circus
Instant Videos Could Represent the Next Leap in A.I. Technology - Cade Metz for The New York Times. The big pull quote:
“At this point, if I see a high-resolution video, I am probably going to trust it,” said Phillip Isola, a professor at the Massachusetts Institute of Technology who specializes in A.I. “But that will change pretty quickly.”
Biden eyes AI dangers, says tech companies must make sure products are safe - Jeff Mason for Reuters. The big pull quote:
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” he [President Biden] said at the start of a meeting of the President’s Council of Advisors on Science and Technology (PCAST). When asked if AI was dangerous, he said, “It remains to be seen. It could be.”
Don’t tell anything to a chatbot you want to keep private - Catherine Thorbecke for CNN Business. The big pull quote:
“You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”
The big picture - Should we press pause on AI?
It’s official, Elon Musk is coming for GPT. And it’s not just Elon... Steve Wozniak (Apple), Andrew Yang (2020 presidential candidate), and Emad Mostaque (Stability AI), among many other celebrated names in the field, have come together to sign an open letter calling for a 6-month pause on training new AI systems more powerful than GPT-4 (you can read that letter here).
After hearing about this proposed moratorium, I had to ask myself, why?... Why would so many people at the forefront of this revolutionary new technology actively push for it to be effectively turned off for any period of time, let alone for half a year (the equivalent of an eon in tech)? Is it possible that Elon and governments like Italy’s (the first Western country to ban ChatGPT) are actually right? Could ChatGPT and other advanced generative AI models actually do more societal harm than good?
To get to the bottom of this, I decided to break down their arguments into 3 main buckets:
Profound risks to society and humanity due to human-competitive AI systems.
Absence of proper planning and management for advanced AI development.
The need for shared safety protocols and robust AI governance systems.
Seeing as these are broad categories, I think my time is best spent focusing on one of them for the time being. So, let’s take a look at #1 - profound risks to society and humanity due to human-competitive AI systems (if you’re interested in hearing more about the other two in future newsletters, let me know!).
Now, what this really entails is a whole suite of sub-issues, including:
Misinformation and disinformation
AI-driven surveillance
Automation and job displacement
AI arms race & loss of control
Misinformation and disinformation
Let’s start with the internet’s number 1 and number 2 exports - misinformation & disinformation. Anyone who’s played around with ChatGPT, Bard, or any of the other major LLMs knows that they aren’t exactly paragons of truthfulness. You can pretty much get GPT to make a convincing argument for any side of any topic, no matter how much it’s forced to strain credulity in the process. Then there’s the sheer volume of content that generative AI can produce. And that’s before we even get to the threat of convincing deepfake technology. But... these are all just hypotheticals for now... right?
Wrong! Jack Posobiec, a conservative activist, recently showcased just how easy it is to use deepfake tech to stoke outrage and ignite panic with his deepfaked clip depicting President Biden seemingly announcing that he’s unilaterally reinstating the draft in order to put US boots on the ground in Ukraine. The video (below) is followed by Jack’s clarification that the clip is a deepfake.
However, in a pretty stark example of the dangers posed by the mass availability of such technology, the clip was immediately re-edited and re-uploaded without the clarification provided by the original poster - racking up millions of views and stirring outrage.
Now, all this looks pretty bad... but I’ve got to say, I think that this problem, while serious, isn’t the underlying issue that could lead to mass disinformation. Far more fundamentally, I have to question why the conduits of the information (US social media outlets, most notably Meta & Twitter) still do so little to verify the veracity of the content on their platforms.
I for one remember all of the headlines in 2016 and 2020. Disinformation, while more sophisticated today, appears to be just as safe on platforms like Twitter and Meta as it always has been. How can we blame a relatively new invention for something we’ve consistently refused to address as a country since at least 2012? (For more, read this MIT Sloan article detailing some of their research on social media misinformation & disinformation.)
Disinformation is fundamentally a cat and mouse game - each time the mouse gets more crafty with its methods, it’s crucial that the cat become more refined in its own strategies. But the social media juggernauts just aren’t keeping up. To learn more about Meta’s struggles with disinformation in the wake of 2016, I highly recommend Sheera Frenkel and Cecilia Kang’s book An Ugly Truth: Inside Facebook's Battle for Domination.
AI-driven surveillance
To start this one off, let’s talk about some numbers. Here in Atlanta, where I live, there are about 50 CCTV cameras per 1,000 inhabitants. Go anywhere in Midtown Atlanta, and you’re almost sure to be on camera. This alone is a scary thought for the Orwellian thinkers amongst us, but it goes much deeper than that. More and more, these CCTVs are being connected to AI systems that analyze their footage in real time. And it’s not just here in ATL. In fact, according to the Carnegie Endowment for International Peace, “51 percent of advanced democracies deploy AI surveillance systems”.
More and more, this technology is being used by repressive governments and open ones alike to keep tabs on their people. Watch the below ACLU video on the dangers of video surveillance and AI to get thoroughly heebie jeebied:
But it isn’t just video surveillance that skeptics are worried about. In 2020, the Brennan Center released a report on the state of predictive policing - a practice in which AI algorithms are employed to analyze data sets to predict future crime hotspots, determine where to deploy police, and even profile and identify civilians considered likely to be involved in criminal behavior. For those acquainted with the Philip K. Dick classic Minority Report, this may all be sounding a little too familiar.
The obvious concern here is the potential for AI surveillance to perpetuate bias and discrimination. After all, AI systems are trained on existing data, which may contain historical and societal biases, ultimately leading to the reinforcement of discriminatory practices. In practice, facial recognition technology has been shown to be less accurate at identifying people of color, women, and other marginalized groups, which can result in wrongful identification and targeting. Without proper safeguards on the use of the tech, this can exacerbate existing inequalities and further marginalize already vulnerable communities, who may be subjected to undue scrutiny and harassment by law enforcement and other authorities.
There’s a lot more to this topic than we have time to go into today, but suffice it to say, AI-driven surveillance is looking like a real problem. However, I don’t exactly see how a 6-month pause on generative AI development would help with this particular issue. With the technology already in widespread use amongst local police departments and federal organizations alike, we need to be rapidly building solutions to counteract bias, both in the training data the models take in and in their output.
Automation and job displacement
I’m originally from the rust belt - Ohio specifically. Where I’m from, the scourge of automation has long been blamed for drying up good-paying factory jobs and putting hard-working people in the bread line. Throughout my childhood, during the boom of off-shoring, it was taken as given that any manual labor job that could be done more cheaply via off-shoring and/or automation inevitably would be.
The perception was always that white collar jobs were the safe ones, that as long as you had a job that required commuting to an office, you’d be fine. Oh how wrong we were...
... “Their [OpenAI & UPenn] analysis finds that 80 percent of all jobs in the United States are “exposed” to AI, meaning a large majority of American workers will find AI chat affecting the way they do parts of their jobs. However, 20 percent of jobs are fully exposed to AI, meaning that most or all of the tasks that make up those jobs could be affected by chat AI.”
- Brent Orrell for The Bulwark
In the article quoted above (Here Are the Kinds of Jobs Chat AI is Likeliest to Affect), Brent Orrell suggests that ChatGPT and its competitors are much more likely to impact white collar professions than blue collar ones. Ironically, the more academic training a role requires, the more likely it is to be replaced by chat AI. Think legal, data processing, and finance & accounting as the first to go.
Is generative AI going to put white collar workers out of a job? It seems likely that at least some major percentage of the workforce will be laid off or otherwise negatively impacted by these cheap, easy to access tools, but you know what they say about assumptions...
As for this issue as a main motivator for the moratorium, I think it’s frankly too late to go back. We’ve seen what the tech can do, and it’s quickly becoming integrated into daily life and business operations alike. My genuine hope is that if generative AI does put a large number of white collar workers out of work, it’ll lead to the societal change that was so desperately needed back when manufacturing workers were losing their livelihoods throughout the rust belt.
But... that isn’t to say that there isn’t real, tangible danger associated with pushing forward on AI. In my personal opinion, as chaotic as unleashing these tools on the general population may be, the potential fallout pales in comparison to the nightmare that might well be born out of the AI arms race...
AI arms race & loss of control
The quote at the top of this newsletter is from the classic short story “I Have No Mouth, and I Must Scream” by Harlan Ellison. In the story, Ellison presents a post-apocalyptic world ruled by a malevolent AI named AM, which, having been built separately by the US, Russia, and China, links itself together and develops self-awareness - in the process gaining an insatiable hatred for humanity. The AI systematically exterminates the human population, except for a small group that it tortures relentlessly - real cheery stuff.
The story was meant to serve as a cautionary tale about the potential consequences of unchecked AI development in the context of an arms race. Needless to say, we aren’t listening. Just to clarify, I’m not talking about the metaphorical arms race currently taking place in Silicon Valley, but rather, the very real and tangible AI arms race taking place between the US and its rivals abroad.
In many media circles, the “AI arms race” between the US and China is loosely treated as an exaggeration, but it’s very real. With the signing of the CHIPS and Science Act in August of ’22 (get a refresher on the subject with the Vox video below), the US government effectively declared its intention to cut China off from the hardware required to enable advanced software like generative AI.
In the time since the bill’s passage, AI has grown significantly more intertwined with the goals of US foreign policymakers. Now, US-made AI products are largely siloed - limited to the US and other “friendly” markets.
In an address last year, the Biden administration’s national security advisor, Jake Sullivan, declared that the US would be:
“... charting the way for large-scale technological infrastructure projects that will serve as national assets, like a potential National Artificial Intelligence Research Resource, which would make advanced AI and computing infrastructure available to any researcher in the United States.”
While I’m grateful to see the US investing in pushing this technology forward, the potential danger of treating these technological miracles as weapons is plain to see. And here’s where I ultimately agree with the aim of the AI moratorium...
As nations and corporations continue to vie for AI supremacy, it’s crucial to take the time necessary to address the ethical dilemmas and potential risks associated with these advancements. If we fail to take the necessary precautions, we risk creating our own version of AM, a remorseless, omnipotent AI that could unleash untold suffering upon the world.
Returning briefly to Elon Musk, he may not have been so wrong back in 2017 when he first called out this problem on Twitter. But I sure hope he is...
AI distractions
Alrighty... things got a bit heavy in that last section, huh? Not to worry - here’s some fun stuff folks around the internet have been doing with AI recently to distract us from the inevitable rise of the machines:
📽️ Augmenting reality with stable diffusion
My childhood dreams of owning a wolf may never come true, but when this gets synced up with augmented reality, you better believe this filter’s going over my cat.
🤖 Convincing ChatGPT that 2+2=5
Only slightly terrifying that this is possible...
🥴 Live action South Park with Midjourney
You know that guy who made the Harry Potter by Balenciaga video that keeps showing up LITERALLY everywhere? Well, he didn’t stop there...
Signoff
Thanks so much for joining me for this first full edition of Future of Product! Next week, I’ll be diving into a brand new AI tool from Trent Lapinski that can cut the amount of time you spend developing product and brand messaging in half. Plus, I’ll be pitting the most popular copy AIs on the market against each other in a head-to-head death match to see which one is best. Can’t wait to see you then!