Danxooo // youtube@4cardspoker
I play and coach Omaha, so my story will be about this game. I use ChatGPT and GitHub Copilot in the browser.
I'll start with the fact that modern neural networks (language models) are generally quite useless for both training and any analysis. I'll tell you what I tried and what came out of it.
1. Hand Analysis
This is total crap. ChatGPT constantly forgets that you can only use exactly two cards from your hand and counts flushes made with one or three of them. Sometimes, "knowing" the river, ChatGPT recommends going all-in with a flush draw on the turn because we'll have the nuts on the river. One time out of five, after a bunch of clarifications about the rules of the game and explanations that we don't know the river when we play the turn, ChatGPT produces something resembling normal commentary. In every other case it's a complete nightmare. So neural networks are not suitable for this purpose; it's better to post the hand on a forum or find a coach.
Here's an example, I took the first hand I came across. I use the "thinking" model o4-mini-high.


I won't show you anything beyond the flop, but ChatGPT already mistook the made straight for a wrap and decided that we were defending against "catch-up draws." It's all just a bunch of words; ChatGPT doesn't understand what's going on at all.
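Incidentally, the two-card rule it keeps breaking is purely mechanical and easy to verify in code. A minimal sketch of my own (just an illustration, not taken from the hand above):

```python
from itertools import combinations

def has_flush_omaha(hole, board):
    """Omaha flush check: exactly 2 hole cards plus exactly 3 board
    cards must share a suit. Cards are strings like 'Ah' (rank, suit)."""
    for h in combinations(hole, 2):
        for b in combinations(board, 3):
            if len({card[1] for card in h + b}) == 1:
                return True
    return False

# Four hearts on the board, but only one in our hand:
hole = ["Ah", "Ks", "Qd", "Jc"]
board = ["9h", "7h", "5h", "2h", "3s"]
print(has_flush_omaha(hole, board))  # False: no flush in Omaha,
# even though the same cards would make a flush in Hold'em
```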
2. Working on Theory
Here, too, everything is rather weak. Most poker texts on the Internet are garbage SEO articles from affiliate sites and outdated books, so the information it gives out is often irrelevant and sometimes even contradicts itself. In addition, standard non-reasoning models like 4o don't know how to calculate odds, forgetting that our call is also part of the pot.

All the formulas are correct, but ChatGPT very often makes mistakes in the small details, so any attempt to actually calculate something falls apart.
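For reference, the detail in question is tiny. A worked sketch of the pot-odds arithmetic (the numbers are my own):

```python
pot, bet = 100, 50   # pot before villain's bet; villain bets 50
call = bet

# Correct: our call is part of the final pot we are trying to win.
required_equity = call / (pot + bet + call)   # 50 / 200 = 0.25

# The mistake non-reasoning models make: leaving our call out.
wrong = call / (pot + bet)                    # 50 / 150 = 0.33...

print(required_equity, wrong)
```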
3. Creating Programs
I am a complete zero in programming. When I had to write my first program, I asked ChatGPT to explain to me like a child what to copy and where to paste it, and insisted that copy-pasting be my only task.
In doing so, I managed to create several working programs:
- A card removal effect calculator for PLO
- Various hand history converters (with varying success)
- A not-too-serious but working teamplay analyzer based on hand histories
- A bunch of different calculators, like calculating the EV of a bet vs. the EV of a check on the river (sketched below)
This all works, for the most part, as it should. You write to the AI "I want the program to work like this and the buttons to be green" and it does it. Obviously, there is a certain ceiling of complexity, and without programming knowledge and skills it will be hard to create anything more complex than what I described above. But it's fine for all sorts of fun little tools.
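For a taste of what these calculators do, here is a minimal sketch of the river bet-vs-check comparison from the list above; the model and the numbers are my assumptions, not the exact program ChatGPT wrote for me:

```python
def ev_check(equity, pot):
    """Check back and realize our equity at showdown."""
    return equity * pot

def ev_bet(equity_when_called, pot, bet, fold_freq):
    """Villain folds fold_freq of the time; otherwise we win or lose the bet."""
    when_called = equity_when_called * (pot + bet) - (1 - equity_when_called) * bet
    return fold_freq * pot + (1 - fold_freq) * when_called

pot = 100
print(ev_check(0.55, pot))                          # 55.0
print(ev_bet(0.40, pot, bet=75, fold_freq=0.35))    # 51.25
```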
It works something like this, I'm doing it right now as an example:

It writes the code. We send it the errors (I didn't even read them):

It sends back the updated code and voila:

Everything as ordered: screenshots are saved on click.
4. Help in systematizing training and creating content
What neural networks do very well is organize walls of text. You can just dump in topics and incoherent scraps of ideas for a new video, and it will assemble them into a good, interesting plan, divided by topic.
But again, this is not a magic pill. ChatGPT won't write a good video script or a ready-made personal training plan; it will just make the process a little more convenient.
It can also be a good starting point for creativity. For example, when I was making a content plan for YouTube, I wrote to it something like: "Come up with 10 ideas for PLO training videos." Frankly, all its ideas were terrible, but I latched onto some words and phrasings and was able to come up with several cool videos.
5. Visuals and Images
This includes creating all sorts of tables, infographics, templates for Notion, etc. ChatGPT copes with this very well.
In addition, Sora from OpenAI has learned to render Russian text properly, which lets you create decent images. For example, here is a cover that I ordered from a designer:

And this is what Sora made:

I like the first one more, but I spent about 10 minutes creating the second one, and also sat and chose from 10 different versions, which is definitely cool.
To sum it up: right now, all my AI use cases amount more to a smart notebook than to an assistant that thinks.
Alexander QuantumEnigma
I asked mine: "What is the best way to prompt you so that you give more correct answers?" ChatGPT says you need to specify the concept. So for a calculator you should state that it must use ICM, name the model (for example, FGS), and ideally add "as in ICMIZER or HRC"; if GTO is what you need, mention that too.
Specific examples: I asked for correct spots for a flop overbet. It gave me some, and I showed them to my trainers, who know exactly what these are; they say the spots are correct, but not mandatory. Conclusion: you need to set the task well, because there's a chance of getting fairly complex lines in which the EV is small. And of course the model won't account for opponents' tendencies, which matter to us in real game situations. But this (the need to prompt well) holds for any use of neural networks. If the goal is to use them specifically for poker, you can look for ready-made prompts; they probably exist.
In ChatGPT, for analytical tasks, you need to enable the o3 model. Grok and DeepSeek have similar options. By default, any AI runs not an analytical model but one better suited to general conversation, and that one has a higher rate of hallucinations, guesswork, and so on. The o3 model, by contrast, is a cold, calculating thing for which correctness is much more important than being nice to the user. In the foreseeable future this information will become irrelevant, because ChatGPT 5 will be released and, it seems, they promise to unify everything so there will be no hassle with jumping between models. But for now, this is how it is. DeepSeek's "thinking" mode gives you the same kind of thing I single out o3 for, even in the free version.
I also used AI to analyze databases. ChatGPT really likes to draw graphs and tables in any unclear situation, so everything is fine here. A friend of mine once fed it his Hand2Note (H2N) config and the frequencies he was interested in, and asked it to analyze his database; by his account, it did a good job.
I plan to adapt it to game selection and to analyzing tournaments with SharkScope. It can't collect information from a link by itself, but you can methodically feed it data in JSON format, which SharkScope kindly lets you download. I think the result will be cool.
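A minimal sketch of what that could look like; the field names here are hypothetical and would need to be matched against the actual SharkScope export:

```python
import json

# Hypothetical field names: check them against your real
# SharkScope JSON export before relying on this.
with open("sharkscope_export.json") as f:
    tournaments = json.load(f)

invested = sum(t["buyin"] + t["rake"] for t in tournaments)
won = sum(t["prize"] for t in tournaments)

print(f"Tournaments: {len(tournaments)}")
print(f"ROI: {(won - invested) / invested:.1%}")
print(f"Average buy-in: {invested / len(tournaments):.2f}")
```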
From the "funny" – I think that very soon it will be quite easy to turn a neural network into an RTA, because there are already those that are capable of translating from other languages in real time (I don't remember the names, but I saw how they recommended such a tool for calls with foreign colleagues. ChatGPT recently got the Agent option (this is when it takes a virtual machine and works there itself, doing a lot of things. I haven't figured out this option yet, but even from what I've seen, it looks very promising.
To check the answers, the simplest thing is to use other neural networks (for example, I currently use Perplexity as a search engine and I like how it copes; it's good at relevance, while the basic version of ChatGPT doesn't always check relevance against the Internet). But again, the o3 model is the key to everything: it's a genuinely good analytical tool, and its effectiveness is limited solely by the user's ingenuity. It also wouldn't hurt to ask the neural network itself to estimate the probability that its answers are correct.
To summarize: feed the fundamental prompt into long-term memory (aka bio, aka user information) and make clarifying requests within the dialogue. It's important to understand that, outside the data it pulls from the bio, a neural network's memory is limited to its context window, and that context is still quite small. You absolutely must give it a role, and it wouldn't hurt to add that accuracy matters more than evasive wording. And be sure to tell it not to rush its answers and to ask clarifying questions.
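The same recipe carries over if you work through the API rather than the browser. A minimal sketch with the openai Python SDK (assumptions: o3 is enabled for your account and OPENAI_API_KEY is set; in the browser, the same text would go into the bio/custom instructions):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "fundamental prompt": a role, accuracy over evasiveness,
# and an instruction to ask clarifying questions first.
role = (
    "You are an MTT poker analyst. Accuracy is more important than "
    "evasive wording. Do not rush: if the task is ambiguous, ask "
    "clarifying questions before answering."
)

response = client.chat.completions.create(
    model="o3",
    messages=[
        {"role": "system", "content": role},
        {"role": "user", "content": "Is 22 a profitable 10bb open-push from the HJ?"},
    ],
)
print(response.choices[0].message.content)
```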
The fundamental problem with any neural network is that it tends to guess, and the simpler the network, the stronger this guessing effect. It's not very pronounced in the flagships, but you still run into it when you do something as specific as poker. I think AI should definitely be used to analyze databases and trends, including when working with large amounts of data. But I would not consider it a replacement for poker software. At least not yet.
Alexander Shtyk
I used ChatGPT as a trainer for practicing preflop: 3-bets and open-pushes in stacks under 20bb.
ChatGPT has Nash in its memory (though I didn't check it against the real Nash charts). I also loaded my own calculations from HRC (the field calls tighter than Nash, so the push ranges widen). I asked it to give me borderline hands (on the edge of open-push/fold) and to check them against the charts I loaded and against Nash.
At first I gave it screenshots from HRC in this form: the 10bb push range from HJ and the call range from CO.

Then I saw that ChatGPT was occasionally messing up, so I uploaded a whole file to it.

But ChatGPT still kept screwing up. When checking, it would say a hand is not in the range even though it's there. Sometimes it corrects itself when you ask it to double-check, but sometimes it stands its ground. Alternatively, you can ask it to give you a hand, check it against the chart yourself, tell it whether it was right or not, and have it keep statistics.
There's no point in using it for more complex things yet, but for preflop/Nash training it works, with the caveat that you check it yourself and keep an eye on the number of mistakes.
In the end, I played around with it for a bit and gave up, because while I was compiling all these files, I learned everything myself.
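That self-check loop is also trivial to run locally, without ChatGPT's range hallucinations. A minimal sketch; the ranges below are a hand-typed illustrative fragment, not a real HRC or Nash chart:

```python
import random

# Illustrative fragment only: substitute your own HRC/Nash export.
PUSH_10BB_HJ = {"22", "33", "44", "55", "66", "A2s", "A5o",
                "K9s", "KTs", "QTs", "JTs", "T9s"}
BORDERLINE = ["K9s", "Q9s", "J9s", "T8s", "A4o", "K8s", "55", "QTo"]

stats = {"right": 0, "wrong": 0}
for _ in range(5):
    hand = random.choice(BORDERLINE)
    answer = input(f"HJ, 10bb. Push {hand}? (y/n) ").strip().lower()
    chart_says_push = hand in PUSH_10BB_HJ
    if (answer == "y") == chart_says_push:
        stats["right"] += 1
        print("correct")
    else:
        stats["wrong"] += 1
        print(f"wrong, chart says {'push' if chart_says_push else 'fold'}")

print(stats)
```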
Sergey Starokrad
I fed ChatGPT 5-card Omaha hands. First I put them into a file and asked it to analyze them and evaluate how well I played that session. It did a good job:

But I didn't like its assessment of my decisions at all. ChatGPT doesn't seem to understand how to play Omaha yet. It handles Hold'em better and more accurately, but that's understandable.
I also asked the AI for recommendations at my current limit (I play PLO100) and what I need to do to increase my win rate, as well as for recommendations on improving my game regardless of the hands. The advice, although obvious, is useful: work on your game, analyze hands, find a coach and like-minded players.
I asked it to simulate a game, with bets and opponents' actions. After several prompts I managed to get this working, although it's much more convenient to do with poker software.
I asked it to pick disciplines and poker sites for me; the AI coped with that too. I also sometimes ask it to calculate equity.
Dmitry SnowBeaver // Data Adventures
I don't think any of the modern AI assistants can be used to improve your game (or anything else that requires thinking). Personally, I use this tool exclusively in programming, to speed up routine things. If I don't understand something myself, I can't expect ChatGPT's answer to help me.
Recently I tried Claude. It analyzes a hand in a very human-like manner, but in the end it produces nonsense, which you can only recognize if you yourself understand poker at a decent level. For example, it can evaluate the same play differently with and without a showdown.
Can AI replace a coach?
A coach can only be replaced by another, better coach :)