AI - what does it cost?

Introduction

How much AI costs is very confusing. Figuring it out for your own application takes serious research. We can read the news, but all we read is that it costs energy. Who cares? We can't calculate with that. Thanks like always, non-info fuckers. We should calculate the energy wasted on fake journalism some day.

The cost depends on the content and on how expensive the model you're using is (DUHHH). Sorry, don't go back to reading the news. In this article, I won't investigate the average costs, because we all know that YOUR use case will always somehow cost more. Fine: that's why we're going to calculate what it costs using the newest and most expensive way possible, for YOUR application.

Decide for yourself, based on this advanced pre-school mathematics, whether AI is affordable or a total scam. Grammar nazis: complain all you want. It's not my strongest thingy, but at least it's genuine. Thanks to fuckers like you, people abuse GPT for this stuff.

On top of that, I could probably out-code you with a CD and a magnetic needle any time. Programming languages are way cooler.

One of the reasons I didn't use AI for this article is that I need words like fuck(face) to emphasize stuff. I would like to emphasize the reader of this article, btw.

This is a serious article, despite the choice of wording, with real up-to-date data. The math is validated. Sources are listed where you probably are in life: at the bottom.

In several steps, we'll build some insight to make the GPT pricing less pizdetz and more human by translating their price list from tokens to time.

Converting the abstract technical price to something human

The newest and priciest models of OpenAI are:

  • gpt-4o, $2.50 per 1M input tokens.
  • gpt-4o-2024-11-20, $2.50 per 1M input tokens.
  • gpt-4o-2024-08-06, $2.50 per 1M input tokens.

The only one more expensive is an older snapshot:

  • gpt-4o-2024-05-13, $5.00 per 1M input tokens.

Still, the calculations below use the current models at $2.50 per 1M input tokens, the most expensive way to use AI (via the OpenAI API) today.

Ok. So now we know the price of input tokens. Output tokens are four times as expensive. That means (current models only):

  • gpt-4o, $2.50 per 250,000 output tokens.
  • gpt-4o-2024-11-20, $2.50 per 250,000 output tokens.
  • gpt-4o-2024-08-06, $2.50 per 250,000 output tokens.
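
The price list above can be sanity-checked with a few lines of Python. This is a sketch: the prices are hard-coded from the bullets above, not fetched from any API.

```python
# Hard-coded from the price list above (see sources at the bottom).
INPUT_PRICE_PER_1M = 2.50      # USD per 1,000,000 input tokens (gpt-4o)
OUTPUT_FACTOR = 4              # output tokens cost four times as much

output_price_per_1m = INPUT_PRICE_PER_1M * OUTPUT_FACTOR   # $10.00 per 1M output tokens

# Expressed the way the bullets do it: how many output tokens does $2.50 buy?
tokens_for_2_50 = 2.50 / output_price_per_1m * 1_000_000   # 250,000 tokens

print(output_price_per_1m, tokens_for_2_50)
```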

By now you'll think: thanks for nothing, I can't do anything with this 'information', it's just listed on the website. Well, what we're about to do is assign a value to a token that we understand.

How much is 1,000,000 tokens?

Many of us have read Harry Potter and know how thick the books are. Since this article is about spending as much as possible, we take the thickest Harry Potter book as our example. I asked GPT which book that is, in detail, and it answered:

The largest book in the Harry Potter series is "Harry Potter and the Order of the Phoenix". Here are some details about its size:
 - Pages: The book has around 766 pages in the U.S. hardcover edition.
 - Word Count: It contains approximately 257,000 words.
 - Sentences: The book has around 15,000 to 17,000 sentences, though the exact count can vary slightly depending on the edition and formatting.
This is the longest book in the series, both in terms of pages and word count.

Ok, three types of examples are listed above, like I asked. Nice. Why three types? Well, some people imagine things best in sentences, some in pages, and some in words, I assume. We'll apply some rounding for our calculation. From now on:

  • The book has 750 pages.
  • The book has 250,000 words.
  • The book has 16,000 sentences.

We treat every word as one token, so the book is 250,000 tokens. 1M input tokens cost $2.50, and 1,000,000 / 250,000 = 4, so sending our book to the OpenAI API costs $2.50 / 4 = $0.625.
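
The book-sending arithmetic above, as a quick Python sketch. The one-word-equals-one-token rule is this article's simplification, not how OpenAI's tokenizer actually counts.

```python
BOOK_WORDS = 250_000           # rounded word count of Order of the Phoenix
TOKENS_PER_WORD = 1            # the article's simplification: one word = one token
INPUT_PRICE_PER_1M = 2.50      # USD per 1M input tokens (gpt-4o)

book_tokens = BOOK_WORDS * TOKENS_PER_WORD
send_cost = book_tokens / 1_000_000 * INPUT_PRICE_PER_1M   # = $0.625

print(send_cost)
```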

What do we know so far?

Well, sending the whole book through the API costs $0.625. But then we have nothing! No response, and we know that will cost money too! Even worse, the response is four times as expensive as the data we sent. What does that mean?

  • sending the book costs $0.625
  • receiving the book costs $0.625 * 4 (factor) = $2.50

So, if you tell the OpenAI API to send back exactly what you sent it, the round trip of this book will cost you $0.625 + $2.50 = $3.125.
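
The round trip above in code: send the book, get the exact same book back, pay for both directions. A sketch using the article's own numbers.

```python
SEND_COST = 0.625        # the book as input tokens, from the step above
OUTPUT_FACTOR = 4        # output tokens cost four times the input price

receive_cost = SEND_COST * OUTPUT_FACTOR   # = $2.50
round_trip = SEND_COST + receive_cost      # = $3.125

print(round_trip)
```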

I love both Star Wars and Harry Potter! Let's convert the Harry Potter book to the Klingon language!

Ok, that's a joke. I know Klingon is the fictional language used by a certain alien species from Star Trek, not Star Wars. Let's calculate how expensive the translation is! Fun! Nerd.

Ok, we're going to figure out how much the languages differ in word count on average. We'll ask Claude (ChatGPT is down while I'm writing this) how many words an average English sentence becomes when converted to a Klingon sentence.

It says that it's impossible to answer that question. Fine. We'll make up our own values, something we can calculate with. From now on, every Klingon sentence is exactly the size of an English sentence on average. Sure, most people could calculate this in no time, but that's not the point. The price is now:

  • $0.625 for sending the book.
  • $2.50 for translating and receiving the book (same token count, output price).

It thus costs $0.625 + $2.50 = $3.125 to translate the thickest Harry Potter book to Klingon! That's batshit cheap! I'll start right away!

I admire the enthusiasm, but I don't freaking think so. There's another factor holding you back.

Converting time to money for a certain task!

You won't have a translated Harry Potter in Klingon directly after paying the price of $3.125.

It costs an LLM time to translate. This is where it gets interesting. We're going to calculate how much an LLM costs per minute! How long will it take the LLM to convert the book? And why is that interesting, since it doesn't change the price anyway? Well, with this info we can calculate how much an LLM costs per minute for a certain task. In this case, a translation task.

To calculate how much an LLM will cost you, you need a baseline.

I will translate 100 words from English to Klingon and divide the time taken by 100 to get the time per word (so, per token).

It took 60 seconds to translate 100 words from English to Klingon. So, 100 (words) / 60 (seconds) ≈ 1.67 words per second.

The Harry Potter book is 250,000 words (tokens). So the time to translate it is 250,000 (words) / 1.67 (words per second) ≈ 150,000 seconds. 150,000 / 60 = 2,500 minutes, and 2,500 / 60 ≈ 41.67 hours.

It will take about 41.67 hours to translate the book from English to Klingon. We know the cost: $3.125. $3.125 (total cost) / 41.67 (total hours) ≈ $0.075 per hour. $0.075 / 60 (minutes per hour) ≈ $0.00125 per minute.
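
The whole time-to-money chain in one sketch. The 100-words-in-60-seconds baseline is this article's own measurement, not a general benchmark, and the total cost is derived from the earlier steps ($0.625 to send plus $2.50 to receive a same-length translation).

```python
BOOK_WORDS = 250_000
BASELINE_WORDS = 100
BASELINE_SECONDS = 60.0
TOTAL_COST = 0.625 + 0.625 * 4     # send + receive = $3.125

words_per_second = BASELINE_WORDS / BASELINE_SECONDS   # ~1.67 words/s
total_seconds = BOOK_WORDS / words_per_second          # 150,000 s
total_hours = total_seconds / 3600                     # ~41.67 h

cost_per_hour = TOTAL_COST / total_hours               # ~$0.075 per hour
cost_per_minute = cost_per_hour / 60                   # ~$0.00125 per minute

print(round(total_hours, 2), round(cost_per_hour, 4), round(cost_per_minute, 5))
```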

Conclusion

You have learned how to calculate the price of AI usage per minute for the highest-quality, most expensive LLM. Spoiler alert: a translation service costs about $0.00125 per minute, or $0.075 per hour.

Fun fact: $10.00 on your OpenAI account is worth about 133 hours of fun!

For the fact checkers among us

I had a huge fight with GPT about the accuracy of my article.

  1. It states that the models I list do not exist. They do; they're listed above with their prices.
  2. It states that the prices for the models are incorrect. Again: see the list above.
  3. It states that the price of the services is per token, not per time. No fuckface, that's the freaking point of the article!

Even the fact that not every token costs the same is accounted for. ChatGPT confirmed that the math of this article is correct, if it wasn't being a big fat liar about LLM costs. Ok, that's good enough. You can easily verify that the costs I provided are correct, and as promised, I chose the most expensive options available for the sake of the article. I did not take into account that:

  • Cached tokens exist and are cheaper.
  • Batch processing is 50% cheaper.

I left these out because, again, we want to know the maximal cost at the best quality!

Instead of listening to retoor, ChatGPT said not to listen to this article but to this 160-day-old article by Reuters: https://www.reuters.com/technology/artificial-intelligence/openai-unveils-cheaper-small-ai-model-gpt-4o-mini-2024-07-18/?utm_source=jemoeder

Reuters? I'm offended. Fear-porn wokies. They should read this article so they can learn to be objective, imho. Even tinyllama has more facts than those people.