Running large-scale machine learning models such as ChatGPT is expensive because of the significant computational resources required to train and serve them. The cost also varies with factors such as the size and complexity of the model, the volume of data being processed, and the level of user demand. Even so, organizations that use AI language models may judge the capabilities to be worth the expense of running them on their servers. ChatGPT is currently the most popular language model in the world, and recent reports suggest the cost of running it is hefty.

ChatGPT costs $700,000 a day to run; GPT-4 may push costs even higher

OpenAI’s language model ChatGPT, which provides natural language processing (NLP) services, requires significant computing power and can cost up to $700,000 per day to run. The high cost comes mainly from the expensive servers needed to generate responses to user input, whether that is writing cover letters, generating teaching plans, or polishing personal profiles. Moreover, Dylan Patel, chief analyst at SemiAnalysis, based his estimate on the GPT-3 model, so ChatGPT could be even more expensive now that it has adopted the newer GPT-4 model.
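To put that daily bill in perspective, a quick back-of-envelope calculation shows what it implies per request. The daily query volume below is an assumed, purely illustrative figure (it is not reported in the article); only the $700,000 daily cost comes from the report:

```python
# Back-of-envelope estimate of per-query serving cost.
# DAILY_COST_USD is the figure reported in the article.
# ASSUMED_QUERIES_PER_DAY is a hypothetical traffic estimate,
# chosen only to illustrate the arithmetic.
DAILY_COST_USD = 700_000
ASSUMED_QUERIES_PER_DAY = 10_000_000

cost_per_query = DAILY_COST_USD / ASSUMED_QUERIES_PER_DAY
monthly_cost = DAILY_COST_USD * 30

print(f"Cost per query: ${cost_per_query:.2f}")  # $0.07 under this assumption
print(f"Monthly cost:   ${monthly_cost:,}")      # $21,000,000
```

Under these assumptions, even a few cents per query adds up to a monthly bill in the tens of millions of dollars, which is why inference hardware costs dominate the discussion below.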


For AI models, inference costs far exceed training costs, and ChatGPT’s inference spending has already surpassed what it cost to train. Companies using OpenAI’s language models have been paying high prices for years: in 2021, running the AI model and the Amazon AWS cloud servers behind the startup Latitude’s AI Dungeon game cost up to $200,000 per month. Microsoft is developing an artificial intelligence chip, code-named Athena, to reduce the running cost of generative AI models, but the chip will be available only for internal use by Microsoft and OpenAI, as early as next year.

