AI Reads | Links and Resources

The machine is still learning!

AWS Summit London 2024


A dense agenda for just one day made it impossible to attend every session. The London AWS EMEA Summit 2024 was all about showcasing the capabilities of Amazon's cloud services; data security was a big topic, along with the power of processing everything in 'the cloud'. Generative AI was by far the most dominant topic of the day, weaving itself into almost every use case, and with Bedrock's impressive list of Foundation Models (except anything OpenAI, of course) AWS is quite literally offering something for everyone. That said, one of my current projects is looking at ways to bring AI into the classroom, and while I did attend a session dedicated to projects in the public sector, Defence was mentioned but not Education (perhaps a sign of the times). Keep an eye on Amazon's Project Kuiper: very similar ambitions to Starlink, with Kuiper satellite launches starting this year and the aim of having 50% of the constellation connected by July 2026.


Getting started in Gen-AI can be daunting and also costly. For someone seeking to get an idea off the ground, this remains an expensive 'game' to play, with costs measured in usage tokens that can easily stack up when considering multimodal LLMs. Through AWS Activate you could potentially get access to start-up funding and plenty of token credits, giving you time to train your model(s), test and iterate.


The underlying Gen-AI 'sauce' at every presentation was the mix of vector embeddings and the power of RAG (Retrieval Augmented Generation): a document you upload is split into chunks, each chunk is embedded as a vector, and the most relevant chunks are retrieved and supplied to the LLM as context, which is what lets you engage with that specific document. An advance on the predictive models of traditional Machine Learning, embedding has enabled much deeper levels of classification and a way to find commonalities between the meanings of words... unlocking the semantics and context that give you that conversation you have when engaging with a foundation model through a chat interface. AWS were good at reminding us of the underlying computational mechanics and processing needed to deliver and maintain the performance of your models.


A case study on how AWS supported TUI, which used a Gen-AI workflow for content generation + DAM (Digital Asset Management) at scale, was a very good example. I could not help but think of the team of writers it would have taken to write all the hotel descriptions in the same tone of voice, at varying lengths (for desktop and mobile), with just the right amount of SEO hooks... and then later in multiple languages.


Carrying that thought further as we consider the future of writers: the power of communication, articulation and nuanced language when crafting a prompt is becoming the skill to have right now. A talk on prompt engineering in Claude 3, hosted by Anthropic, revealed how even the creators of these foundation models are still learning their capabilities through constant refinement, conversational inputs and validation.


The capabilities of AI agents as described by AWS...


"Agents for Amazon Bedrock allow you to define an action schema and get the control back whenever the agent invokes the action. This enables you to implement business logic in the backend service of your choice." - it goes on to note "...the ability to execute time consuming actions in the background (asynchronous execution) while continuing the orchestration flow."
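That "return of control" pattern is easier to picture in code. The sketch below is illustrative only: the schema shape, action names and booking logic are invented, and this is not the actual Agents for Amazon Bedrock API. It just shows the division of labour the quote describes, where the agent picks the action and your own backend runs the business logic.

```python
# Illustrative action schema: which actions exist and what parameters
# each one requires. (Invented example, not the real Bedrock format.)
action_schema = {
    "get_booking": {"params": ["booking_id"]},
    "cancel_booking": {"params": ["booking_id", "reason"]},
}

def backend_service(action, params):
    # Your business logic, in the backend service of your choice.
    if action == "get_booking":
        return {"booking_id": params["booking_id"], "status": "confirmed"}
    if action == "cancel_booking":
        return {"booking_id": params["booking_id"], "status": "cancelled"}
    raise ValueError(f"Unknown action: {action}")

def handle_agent_turn(agent_request):
    # The agent hands control back with its chosen action and arguments;
    # we validate against the schema, execute, and return the result for
    # the agent to continue its orchestration flow.
    action = agent_request["action"]
    params = agent_request["parameters"]
    missing = [p for p in action_schema[action]["params"] if p not in params]
    if missing:
        raise ValueError(f"Missing parameters: {missing}")
    return backend_service(action, params)

result = handle_agent_turn(
    {"action": "get_booking", "parameters": {"booking_id": "B123"}}
)
print(result)  # {'booking_id': 'B123', 'status': 'confirmed'}
```

The asynchronous execution the quote mentions would mean kicking off `backend_service` in the background and letting the agent carry on while it runs.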


To be clear, large enterprise-level organisations are already working on ways to automate as many 'human' tasks as possible, and Amazon Bedrock, along with Microsoft Copilot, is working hard to offer the capability to deploy multiple agents within organisational workflows and customer experiences. The phrase "keeping the human in the loop" (HITL) is nothing new; however, we are rapidly entering a new age.


April 2024

Written by Sean Simone, IOKA