MLNews

Emu2: An Efficient Text-Image-Video In-context Learning Model

Emu2 is adaptable and intelligent, with versatile abilities for handling multiple tasks seamlessly, much like a human brain. It is also remarkably flexible, like a friend who can solve problems they have never encountered before. The model was created by researchers from the Beijing Academy of Artificial Intelligence, Tsinghua University, and Peking University.

Emu2 Workflow

Practical Implications of Emu2

Emu2 is a flexible, problem-solving model. You can provide input as images, videos, or text, and the model acts according to the given instructions: describing images, answering tricky questions, or generating new images from an input text. The model can also understand a video and explain anything about it. The researchers made the large multimodal model better at learning from context by scaling it up, making it bigger and more powerful.

Video generation through Emu2

The model also works as a text-image-to-video model: given an image as input along with text, it generates output in the form of video clips based on that input image.

Emu2 can learn from very little information: if the user shows the model a picture and gives it a few hints, the model is able to understand what the user wants to do with that picture, and Emu2 can perform such tasks promptly.

In-context learning of Emu2

The model builds on a text-to-image generation base model. The image above shows Emu2's in-context learning ability: the model takes a text prompt as input and generates the required output, remembers the last generated output, and then acts on further text input given by the end user.
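As a rough sketch of how such an interleaved in-context prompt can be assembled, consider the following. The `[<IMG>]` placeholder convention and the `build_prompt` helper are illustrative assumptions for this article, not Emu2's actual API: the idea is simply to alternate images with text so the model can infer the task from a couple of demonstrations.

```python
# Hypothetical sketch of building an interleaved in-context prompt.
# The [<IMG>] placeholder and build_prompt helper are illustrative
# assumptions, not Emu2's real interface.

IMG_PLACEHOLDER = "[<IMG>]"

def build_prompt(examples, query_image):
    """Interleave (image, caption) demonstration pairs with a final query image.

    examples: list of (image_path, caption) demonstration pairs
    query_image: path of the image whose caption the model should infer
    Returns the prompt string and the ordered list of image paths.
    """
    parts, images = [], []
    for image_path, caption in examples:
        parts.append(f"{IMG_PLACEHOLDER} {caption}")
        images.append(image_path)
    # The query image goes last, with its caption left for the model to fill in.
    parts.append(IMG_PLACEHOLDER)
    images.append(query_image)
    return "\n".join(parts), images

prompt, images = build_prompt(
    [("cat.jpg", "a photo of a cat"), ("dog.jpg", "a photo of a dog")],
    "bird.jpg",
)
```

With two demonstrations in the prompt, the model would be expected to continue the pattern and caption the final query image on its own.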

Emu2 image output

This model showed outstanding performance on in-context learning, meaning it can easily learn from a small amount of given information. Humans are good at understanding a task and then performing it using their own senses with the help of the brain, and Emu2 behaves in a similar way. It can generate images from a text prompt efficiently.

The research is available on arXiv, the code is available on GitHub, and a demo is also available. It is an efficient problem-solving model with diverse practical implications, such as creative smartphone applications, video generation, and advanced research and development.

I tried the demo of Emu2 provided by the researchers. I gave the model an image of fireworks along with the text prompt "What is this?". The model generated a textual description of the image; however, when asked to generate an "image of fireworks in Sydney", it was unable to produce an image and instead responded with textual output. So, Emu2's demo is not working the way the researchers have claimed!
