A mysterious new image generation model is beating models from Midjourney, Black Forest Labs, and OpenAI on the crowdsourced Artificial Analysis benchmark.
The model, which goes by the name “red_panda,” is around 40 Elo points ahead of the next-best-ranking model, Black Forest Labs’ Flux1.1 Pro, on Artificial Analysis’ text-to-image leaderboard. Artificial Analysis uses Elo, a ranking system originally developed to calculate the relative skill level of chess players, to compare the performance of the various models it tests.
Similar to the community AI benchmark Chatbot Arena, Artificial Analysis ranks models through crowdsourcing. For image models, Artificial Analysis selects two models at random and feeds them a unique prompt. Then, it presents the prompt and resulting images, and users choose which they think better reflects the prompt.
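The pairwise voting described above maps naturally onto the standard Elo update rule. Below is a minimal sketch of how one head-to-head vote could move two models' ratings; the K-factor and the function names are illustrative assumptions, not Artificial Analysis' actual parameters.

```python
# Minimal Elo-update sketch for one head-to-head image vote.
# K_FACTOR is an illustrative assumption, not Artificial Analysis' setting.

K_FACTOR = 32  # how strongly a single vote moves the ratings (assumed)

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool) -> tuple[float, float]:
    """Return new (rating_a, rating_b) after one vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    delta = K_FACTOR * (score_a - exp_a)
    return rating_a + delta, rating_b - delta
```

Under this model, a 40-point Elo lead like red_panda's corresponds to an expected head-to-head win rate of roughly 56 percent, so the gap over Flux1.1 Pro is modest per vote but consistent.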
Granted, there’s some bias in this voting process. Artificial Analysis’ voters are AI enthusiasts, for the most part, and their choices might not reflect the preferences of the wider community of generative AI users.
Bias or no, red_panda is also among the faster models on the leaderboard: it takes a median of around 7 seconds to generate an image, over 100 times faster than OpenAI's DALL-E 3.
So, where did red_panda come from? Which company made it? And when can we expect it to be released? All good questions. AI labs increasingly use community benchmarks to drum up anticipation ahead of an announcement, though, so we may not have to wait long to find out.