Windows Copilot Newsletter #5
Microsoft Copilots all the things; a quarter of completions might be false; why to get emotional with an AI chatbot
Welcome to the Windows Copilot Newsletter! This week has been absolutely chockers with news - much of it flowing from Microsoft’s big ‘Ignite’ event on Wednesday - so let’s dive right in…
Top Stories
At Microsoft’s ‘Ignite’ event - where CEO Satya Nadella fronts the firm’s biggest and techiest customers - Microsoft announced that all of their AI efforts would be branded as ‘Copilot’. Bing Chat is pining for the fjords. Get ready to be Copiloted into senselessness as Microsoft throws their catch-all AI moniker onto everything. Read about that here.
Confirming rumours that had been circulating for a fortnight, Microsoft announced that Copilot will be backported to its Windows 10 operating system. As there are well over a billion PCs running Windows 10 - despite Microsoft’s efforts to upgrade them to Windows 11 - the number of PCs and organisations with access to a deeply integrated AI chatbot has effectively tripled. Read about this here.
If you work with PDF documents on your computer - and who doesn’t? - you can now use Copilot to analyse and summarise them from within the PDF reader built into Microsoft’s Edge web browser. That’s a very powerful capability - but think carefully before you send off a sensitive PDF to Microsoft’s cloud for analysis! Read all about that here.
Top Tips
Research suggests that to get the best responses from an AI chatbot, it’s a good idea to add a little emotion to your prompts. “This is very important to my career” may sound like pleading, but it turns out that it significantly improves the results. Read that research here.
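If you drive a chatbot through an API rather than a web page, the same trick is easy to automate. Here’s a minimal, purely illustrative Python sketch - it assumes the OpenAI client library (v1+) and an API key in your environment, and the model name is just a placeholder - that appends that emotional cue to every prompt:

```python
# A minimal sketch, not from the research itself: append an emotional cue to a
# prompt before sending it to a chatbot API. Assumes the openai Python package
# (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_emotion(question: str) -> str:
    # Add the emotional stake the research found helpful.
    prompt = f"{question}\n\nThis is very important to my career."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_emotion("Summarise the key points of this quarterly report."))
```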
Using AI chatbots for academic research? Here’s a list of some best practices and tools that will help you work with AI chatbots more accurately and more effectively.
Safely & Wisely
Research has shown that as much as 27% of all responses delivered by AI chatbots may have no basis in fact. That finding formed the basis of an excellent article in the New York Times. Read it here.
JAMA - the leading medical journal in the United States - published a report showing how medical misinformation can be generated and disseminated very efficiently by AI chatbots. Read their (very important) paper here.
Google’s AI chatbot Bard was recently patched to protect it against a ‘prompt injection’ attack - one that serves as a template for many similar attacks. Read the full analysis here.
Longreads
“The Quiet Question” asks whether we have any idea how, where or why AI chatbots are being used inside our organisations - and what we might do about that lack of knowledge. Read it here.
Book Updates
When Microsoft killed ‘Bing Chat’, replacing it with ‘Copilot’, I sent up a flare to my publisher: Could we make one more revision before the book goes to the printer? Those revisions are on my desktop now, awaiting my review. After these changes get my OK, it’s off to the presses!
We’re still on track to hit our 6 December publishing date, so time’s growing short to pre-order a discounted copy of Getting Started with ChatGPT and AI Chatbots. You can do that here.
We have two great events planned around the release of the book - one online, and one in person. Both are free!
If you’d like to join us online at 11 am on the 29th of November for an hour-long exploration of AI chatbots, and how you can use them safely and wisely, please click the button below to register for a free class being offered through Growth Academy.
If you would like to join us in person in Sydney for a book launch event at noon on the 6th of December - and yes, we will have books available for purchase! - please click the link below to register.
We’ll be back again next week with lots more news about Windows Copilot and the explosive development and use of AI chatbots!
Until then,
Mark Pesce
www.markpesce.com // Need advice on AI chatbots?