Introducing 'Jambo'...
To accompany my journey to getting AI superfit, I've created a mascot that I've come to call 'Jambo'.
That creation process provided an interesting insight into using ChatGPT's image generation capability and RunwayML.com.
I first experimented with Runway a while back, during a discussion with a friend who runs a marketing agency.
I casually mentioned that the latest AI tools could probably bring the animal mascots he uses on his business website to life.
So I tested out the Guinea Pig character and was amazed that a simple prompt of "subject moves its head and speaks" brought the still image to life with surprising detail.
My next experiment followed my finding a picture of myself from 25 years ago, taken for a work-related project.
With this one, the idea was to get my much younger self to speak to my older self with some lip-synced audio. The clip below gives you an idea of the output - not the best of 'deepfakes', but an insight into the possibilities and how it can be achieved.
When it came to illustrating this Bootcamp Bulletin, I started with DALL-E but got frustrated - particularly when trying to add any sort of text to the image.
I then saw that you can train Runway on a character, which you can then use for more consistent outputs.
This requires a minimum of nine reference images, so I used photos of myself that I had kicking around from over the years.
Once you have trained the character generator, you can reference your character tag in prompts.
Usefully, you can use a slider to adjust how realistic - or how 'cartoonish' - you want your character to look.
Once I'd settled on a character close to what I imagined at the outset of the exercise, I drew on my earlier experience with generative video prompts to bring 'Jambo' to life.
Depending on the point I've wanted to make in my LinkedIn communications, I have now tested a variety of prompts, including:
"subject puts both thumbs up"
"subject points towards audience"
"subject holds up clipboard"