Is ChatGPT the Co-creator of Bard?

And more on Google's partnerships and open-source AI

GM! Welcome to The Status Code.

We're like a team of painters, creating a beautiful canvas of AI headlines & trends for you to admire.

Here’s what we have for today:

  1. 🥳Cerebras Joins the Open-Source AI Party

  2. 🕵️Is ChatGPT the co-creator of Bard?

  3. 🤔Two Sigma Problem of Education

(Estimated reading time: 3 minutes 45 seconds)

Not subscribed yet? Stay in the loop with weekly AI news by reading The Status Code for five minutes.

Two Headlines

Two main stories from last week, if you have only ~2 minutes 30 seconds to spare.

1/ Cerebras Joins the Open-Source AI Party

When a market gets crowded, open-source it!

That's what Meta did with LLaMA.

Remember Alpaca? It's gone now, but there's a new player: Cerebras-GPT!

So, what's Cerebras-GPT? It launched this week, and it's like a seven-headed dragon.

Cerebras-GPT is a family of seven GPT models, ranging from 111M to 13B parameters.

The team behind Cerebras believes that gated models will hamper the world economy.

And they want to ignite a new enthusiasm for open-source.

Cerebras also suggested a new scaling law (using an open dataset) that predicts a model's performance based on its computing budget. Now you can estimate the costs and waste for any model size.
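The budgeting idea behind a scaling law can be sketched as a simple power law relating loss to training compute. The constants below are made up for illustration; they are not Cerebras' fitted values:

```python
import math

# Hypothetical power-law constants -- illustrative only, not Cerebras' actual fit.
A = 25.0   # scale coefficient
B = 0.057  # compute exponent

def predicted_loss(compute_flops: float) -> float:
    """Predict validation loss from training compute via L(C) = A * C^(-B)."""
    return A * compute_flops ** -B

# The payoff of a power law: doubling compute shrinks loss by a fixed
# ratio of 2^(-B), so you can budget a training run before starting it.
small = predicted_loss(1e20)
large = predicted_loss(2e20)
assert large < small
assert abs(large / small - 2 ** -B) < 1e-9
```

With a fit like this in hand, you can pick the cheapest compute budget that reaches a target loss instead of discovering the cost after training.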

Fun fact: Cerebras claims that a single engineer trained these models (compared to OpenAI's 35).

But will it last?

If you need a solid open-source model, Cerebras is a great choice.

But if you want a ChatGPT alternative, it might not be the one.

Yes, you can fine-tune it. But that extra step shrinks the potential user base.

Besides, one developer compared it against Google's 2B Flan model and found Flan about 20% faster at inference.

But cut them some slack. You can't expect an open-source model to shine in its first release.

What do people think?

Some devs think that sacrificing quality for accessibility is not worth it.

And remember, accuracy isn't about parameter size. Training data quality and quantity matter, too (we learned this from LLaMA and Alpaca).

So, folks are waiting for Open Assistant, the open-source ChatGPT. They are building models that can run on consumer-grade devices.

People want this conversational AI thing.

And some of them believe AI running without the internet is the future.

Anyway, Cerebras’ scaling law is here to stay (at least for some time). It will save huge costs on modeling and development.

2/ 🕵️Is ChatGPT the co-creator of Bard?

Last week, people showed their disappointment with Bard.

But can we blame Google, which called it an "experiment"?

This week, Google announced new strategies:

1/ Partnering with Replit

Microsoft's AI success started with GitHub Copilot and continued with Copilot X.

Now, they are bringing Copilot for Office, Apps, and well… everything.

So, Google can’t stand it.

They are now partnering with one of the hottest startups in town, Replit.

Replit is an online IDE and collaborative code-editing platform.

Replit will use Google Cloud infrastructure for Ghostwriter, a code-completion tool.

They plan to build a standalone, context-aware LLM to help even non-programmers code.

Meanwhile, Google Cloud and Workspace developers will gain access to Replit's platform.

Replit boasts that 30% of the code written in its IDE is AI-generated.

And unlike GitHub's Copilot, Replit will let you not only write code with AI but also run it.

2/ Patching things up with DeepMind (no, they're not going through a divorce)

Google and DeepMind had their differences. It’s a principles thingy.

DeepMind favors a utilitarian approach, whereas, for Google, business means everything.

So, they didn't have the same ideas. And their parent company, Alphabet, had to step in.

Alphabet settled the feud by letting DeepMind operate independently while reporting to a new board.

Why? Alphabet is aware it needs to play the long game.

Allowing DeepMind that freedom might hurt Google's revenue in the short term, but if it wants to win the war, it must prepare itself, not be hell-bent on winning every single battle.

Anyway, therapy seems to be going well.

They're working together on a project called Gemini, reportedly a large language model.

But it's a forced alliance.

OpenAI is hunting employees of top AI labs, or persuading them to jump ship (lol).

A month ago, DeepMind AI researcher Igor Babuschkin joined Elon Musk to build an OpenAI rival.

Now, Google AI researcher Jacob Devlin left Google for OpenAI.

But before he left, he reportedly raised concerns about Google training Bard on ShareGPT data.

So, Google fears employee exodus. And it wants to assemble a dream team to tackle the risk. (Avengers, assemble!)

Anyway, which party are you on? Will Google bounce back through these partnerships?

One Trend

1 trend you can pounce on. Reading time: ~1 minute 40 seconds

🤔Two Sigma Problem

Back in 1984, a researcher named Benjamin Bloom published a game-changing study.

It changed how we thought about the education system.

He had three experimental groups:

1. Control Group: 30 students per teacher, regular testing.

2. Mastery Group: Same as control, but with extra tests and reinforcement methods for mastering topics.

3. Tutored Group: 3 students per tutor max, using mastery learning methods.

And the results? Mind-blowing.

Mastery bumped students up a grade level (B to A). (1 Sigma)

Tutoring did even better, jumping two levels (C to A). (2 Sigma)

And the average tutored student outperformed 98% of the Control Group.
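That 98% figure is just the normal curve at work: a student two standard deviations above the mean scores above roughly 98% of peers. A minimal sketch using the standard normal CDF (via Python's `math.erf`):

```python
import math

def percentile_below(sigmas: float) -> float:
    """Fraction of a normal population scoring below a student who is
    `sigmas` standard deviations above the mean (standard normal CDF)."""
    return 0.5 * (1 + math.erf(sigmas / math.sqrt(2)))

print(round(percentile_below(1) * 100, 1))  # -> 84.1 (one sigma: mastery learning)
print(round(percentile_below(2) * 100, 1))  # -> 97.7 (two sigma: tutoring, ~98%)
```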

He called it the "Two Sigma Problem": finding teaching methods as effective as one-on-one tutoring, but practical at scale.

It revealed personalized learning could:

- Boost everyone's learning game (There's no bad student. There's bad teaching.)

- Shrink the gap between students from different backgrounds

- Help students max out their potential at their own pace

- Make teachers super effective by addressing individual needs

But such a massive improvement ain't easy.

First, resources are scarce. Teachers are swamped, and budgets are tight.

Second, it's difficult to bring the necessary changes to the traditional education system. It needs a redesign from the ground up.

But, the change resistance is real. A major education shift faces pushback from policymakers, educators, and others.

So, many startups tried cracking personalized learning. They tried to increase the budget, collect fair data and develop frameworks. But the progress was slow. And technology wasn't ready to disrupt the industry yet.

Enter language models like GPT-4. They're chat-savvy and adaptive, making them feel like real tutors. And they have forced decision-makers to shift their mindset.

These models can analyze students' learning patterns, strengths, and weaknesses.

They can offer tailored content and real-time help (think Khan Academy's chatbot).

For teachers, these models can fill gaps in their students' knowledge.

But they have one problem: They hallucinate. So sometimes, they might give wrong information or teach the wrong lesson.

As LLMs keep improving, though, it seems possible that one day they can deliver the reliability tutoring needs.

And as AI takes a front seat in education, human intelligence might skyrocket.

Meanwhile, for entrepreneurs, it's a golden opportunity. There might not be a bigger problem out there than the Two Sigma problem (maybe, apart from cancer and other health problems).

But there's an opening.

There are endless possibilities. You can go from personalized solutions to developing new teaching methods.

And, you can also try solving human interaction and bias problems.

This is your calling. Embrace the Two Sigma Problem and harness AI's power to revolutionize education.

🦾What happened this week

  • Sam Altman talked with Lex Fridman on his podcast!

  • OpenAI chief scientist Ilya Sutskever also talked about AGI in an interview

  • Gary Marcus posted a letter that proposes a pause in AI for 6 months

  • Luma released a video for 3D API

  • ReTorch AI, a data science AI tool was released

  • ShitFilter.News, a news AI tool was released

  • VideoTap, a YouTube-video-to-blog-post converter, was released

  • Google & Deepmind are working together on a secretive project Gemini

  • Blocktrace builds AI Chatbot to Simplify Blockchain Transaction Tracking

  • Tired of ChatGPT? Try CatGPT

  • Stanford shut down Alpaca and Cerebras introduced Cerebras-GPT

💰Funding Roundup

  • Axios HQ, an AI-powered writing tool, raises $20M in Series A funding

  • Perplexity AI raised $25.6M in a round led by Transformer Capital

  • Vital, a health-AI company, raised $24.7M led by Transformer Capital

  • DataDome, an AI bot-blocker company closes $42M in Series C Funding

  • Labviva, a life-science AI company, raised $20M led by Biospring Partners

  • British AI startup Fetch.ai secures a $40M funding round from DWF Labs

  • Stratyfy, a predictive analytics company, raised $10M in funding

  • Viz.ai, an AI health platform, received $510K in funding

🐤Tweet of the week

😂 Meme of the week

That’s it for this week, folks! If you want more, be sure to follow our Twitter (@CallMeAIGuy)

🤝 Share The Status Code with your friends!
