Next.js Discord

Discord Forum

Best practices for loading in LLM data

Unanswered
Alligator mississippiensis posted this in #help-forum
Alligator mississippiensisOP
I'm developing a project, and one of my main goals is that data is generated and rendered as soon as possible. I have 3 tabs, all contained within a component that acts as a navbar, and each tab depends on the previous one. The first tab is pulled from Supabase, the next one is generated by an LLM based on the first, and the last one is also generated by an LLM based on the one before it.

What's the best way to structure the fetching? Should I just add a useEffect for the LLM-generated ones inside the component that contains them all?
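For reference, the dependency chain described above (database → LLM → LLM, each step awaiting the previous result) could be sketched roughly like this. `fetchTabOne` and `generateFrom` are hypothetical stand-ins for a Supabase query and an LLM call:

```typescript
// A minimal sketch of the chained dependency: tab 1 comes from the
// database, tab 2 is generated from tab 1, tab 3 from tab 2.

async function fetchTabOne(): Promise<string> {
  // Stand-in: real code would query Supabase here.
  return "tab-1 data";
}

async function generateFrom(input: string, tab: number): Promise<string> {
  // Stand-in: real code would call the LLM with `input` as context.
  return `tab-${tab} generated from (${input})`;
}

// Sequential pipeline: each step awaits the previous one's result,
// so tab 2 cannot start before tab 1 has resolved, and so on.
async function loadTabs(): Promise<[string, string, string]> {
  const tab1 = await fetchTabOne();
  const tab2 = await generateFrom(tab1, 2);
  const tab3 = await generateFrom(tab2, 3);
  return [tab1, tab2, tab3];
}
```

Whether this chain lives in a `useEffect`, a server component, or an API route is exactly the question — the reply below argues for moving the LLM steps behind a streaming API route rather than plain client-side effects.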

1 Reply

Roseate Spoonbill
LLM content streaming is a fairly complex topic. I would do it similarly to how Theo Browne built the sync layer for T3 Chat (he has talked about the sync layer in multiple videos). Roughly:
1. The UI calls a backend API route to start LLM processing (e.g. from the first tab to the second tab).
2. The API endpoint processes the LLM response and writes it both to the database (e.g. line by line) and to the API route's response.
3. Fast UX route - writes to the API route response are consumed by the user who actually requested the processing. The data can be sent token by token and handled accordingly on your frontend.
4. General data route - writes to the database in larger chunks (e.g. line by line) ensure the data remains visible even if the user refreshes the page mid-stream. In this scenario you can still sync data to the user in larger, but still real-time, chunks.
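The dual-write in steps 2-4 could be sketched like this, using the standard `ReadableStream`/`Response` Web APIs (available in Next.js route handlers and Node 18+). `llmTokens` and `saveChunk` are stand-ins for a real LLM SDK stream and a Supabase write; this is a sketch of the idea, not a drop-in implementation:

```typescript
// Stand-in for an LLM SDK's token stream.
async function* llmTokens(): AsyncGenerator<string> {
  for (const t of ["Hello", " world", "\n", "second", " line", "\n"]) {
    yield t;
  }
}

const db: string[] = []; // stand-in for a Supabase table

async function saveChunk(line: string): Promise<void> {
  // Real code would append/upsert the partial response row here,
  // so a refresh mid-stream still finds the data so far.
  db.push(line);
}

// One handler body that serves both routes from the answer above:
// tokens go to the HTTP response as they arrive (fast UX route),
// while completed lines are also persisted (general data route).
function streamAndPersist(tokens: AsyncGenerator<string>): Response {
  const encoder = new TextEncoder();
  let lineBuffer = "";

  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const token of tokens) {
        controller.enqueue(encoder.encode(token)); // token by token to the caller
        lineBuffer += token;
        // Flush to the database in larger, line-sized chunks.
        let nl: number;
        while ((nl = lineBuffer.indexOf("\n")) !== -1) {
          await saveChunk(lineBuffer.slice(0, nl));
          lineBuffer = lineBuffer.slice(nl + 1);
        }
      }
      if (lineBuffer) await saveChunk(lineBuffer); // trailing partial line
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

The caller who started the job reads the `Response` body token by token; anyone who reloads the page instead reads the line-sized chunks back out of the database.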