OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art
By Editor - Tue Mar 14, 10:46 am
OpenAI has released a powerful new image- and text-understanding AI model, GPT-4, which the company calls “the latest milestone in its effort in scaling up deep learning.” GPT-4 is available today to OpenAI’s paying users via ChatGPT Plus (with a usage cap), and developers can sign up on a waitlist to access the API.

Pricing is $0.03 per 1,000 “prompt” tokens (about 750 words) and $0.06 per 1,000 “completion” tokens, where “tokens” are pieces of raw text (e.g. the word “fantastic” would be split into the tokens “fan,” “tas” and “tic”). Prompt tokens are the pieces of the input text, while completion tokens are the text GPT-4 generates.

As it turns out, GPT-4 has been hiding in plain sight: Microsoft confirmed today that Bing Chat, its chatbot tech co-developed with OpenAI, is running on GPT-4.
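To make the quoted pricing concrete, here is a minimal sketch of how the per-token rates translate into a dollar cost for a single API call. The rates and the roughly 750-words-per-1,000-tokens figure come from the article; the example token counts are purely hypothetical.

```python
# Rough cost estimate for GPT-4 API usage at the rates quoted above.
# The example token counts below are made up for illustration only.

PROMPT_RATE = 0.03 / 1000      # USD per prompt (input) token
COMPLETION_RATE = 0.06 / 1000  # USD per completion (output) token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one API call."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Hypothetical request: a 1,500-token prompt (~1,125 words) and a 500-token reply.
print(f"${estimate_cost(1500, 500):.4f}")  # -> $0.0750
```

Note that completion tokens cost twice as much as prompt tokens, so long generated answers dominate the bill even when the input is short.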
Read the original post:
OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art