Conversation
```diff
-# add to list of partial messages
-partial_messages.append(partial_message)
+partial_messages = get_empty_prompt_messages(prompt)
```
This function already handles partial formatting and takes care of the TODO
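For context, a helper like this presumably renders each message template while filling unfilled variables with empty strings. The sketch below is hypothetical: the `Prompt` and `ChatMessage` stand-ins are simplified, and the real `get_empty_prompt_messages` in llama-index-core may behave differently.

```python
from dataclasses import dataclass, field


@dataclass
class ChatMessage:
    role: str
    content: str


@dataclass
class Prompt:
    message_templates: list = field(default_factory=list)


class _EmptyOnMissing(dict):
    """format_map helper: unfilled template variables render as ''."""

    def __missing__(self, key):
        return ""


def get_empty_prompt_messages(prompt):
    """Render each message template, substituting empty strings for
    any variables that have not been filled in yet (partial formatting)."""
    return [
        ChatMessage(role=t.role, content=t.content.format_map(_EmptyOnMissing()))
        for t in prompt.message_templates
    ]
```

Using `format_map` with a defaulting dict avoids the `KeyError` a plain `str.format` would raise on missing variables.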
Expanded some mocking behavior to test streaming of various LLM programs
```diff
 from llama_index.core.postprocessor.types import BaseNodePostprocessor
 from llama_index.core.prompts import BasePromptTemplate, SelectorPromptTemplate
-from llama_index.core.prompts.chat_prompts import CHAT_CHOICE_SELECT_PROMPT
+from llama_index.core.prompts.chat_prompts import CHAT_CONTENT_CHOICE_SELECT_PROMPT
```
Updated prompt names to indicate whether they are `CHAT_TEXT` or `CHAT_CONTENT` (the latter supporting multimodal chat input). Open to naming feedback.
(force-pushed from 90d87f2 to 7cb0ba3)
Reading through the synthesizer classes, one change that would drastically reduce code bloat is to support formatting text prompts via `messages_to_prompt` in the multimodal synthesizers, and then update all the `BaseSynthesizer` classes to be light wrappers around the multimodal synthesizers.
This would reduce code in the synthesizer classes, in the tests, and potentially in the prompts as well.
The downside is that the prompts might change slightly, since they aren't one-to-one: `messages_to_prompt` appends prefixes like "user: " and "assistant: ". That's all relatively easy to handle, though.
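To illustrate the prompt drift mentioned above, a generic `messages_to_prompt` might look like the sketch below. This is illustrative only: real llama-index LLMs ship model-specific implementations whose exact prefixes and separators vary.

```python
from collections import namedtuple

ChatMessage = namedtuple("ChatMessage", ["role", "content"])


def messages_to_prompt(messages):
    """Flatten chat messages into a single text prompt, prefixing each
    line with its role and ending with an 'assistant: ' completion cue."""
    lines = [f"{m.role}: {m.content}" for m in messages]
    # The trailing prefix cues the model to continue as the assistant.
    lines.append("assistant: ")
    return "\n".join(lines)
```

The role prefixes are exactly why a text prompt produced this way won't match a hand-written text template one-to-one.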
I kind of agree here. It'd be nice if the existing synthesizers just "handled" multimodal content. I'm all for anything that reduces the bloat here.
This class contains multiple updates:
1. Supports multimodal content.
2. Supports streaming structured responses.
3. Reduces code duplication by combining `_refine_response_single` and `_give_single_response` into `_update_response`.
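The merge in point 3 can be illustrated with a toy sketch in which one `_update_response` helper covers both the first answer and subsequent refinements. This is hypothetical code, not the real class: the prompt strings and the `llm` callable are stand-ins.

```python
class RefineSketch:
    """Toy illustration of merging the first-answer path and the
    refine path into a single _update_response helper."""

    def __init__(self, llm):
        self._llm = llm  # callable: prompt string -> response string

    def _update_response(self, query_str, text_chunk, response=None):
        if response is None:
            # First chunk: behaves like the old _give_single_response.
            prompt = f"Answer {query_str!r} using: {text_chunk}"
        else:
            # Later chunks: behaves like the old _refine_response_single.
            prompt = f"Refine {response!r} for {query_str!r} using: {text_chunk}"
        return self._llm(prompt)

    def get_response(self, query_str, text_chunks):
        response = None
        for chunk in text_chunks:
            response = self._update_response(query_str, chunk, response)
        return response
```

The `response is None` branch replaces the need for two near-duplicate methods that differed only in which prompt template they formatted.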
```diff
 num_iters += 1
-assert num_iters > 10
+assert num_iters == 1
```
The Refine synthesizer with structured responses and streaming now yields the entire text as a single chunk, after streaming all the flexible models, to ensure the complete JSON is present.
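The single-chunk behavior described above can be sketched as buffering the token stream and validating the accumulated text as JSON before yielding anything. The names and structure below are illustrative, not the actual implementation.

```python
import json


def stream_structured_response(token_stream):
    """Buffer all streamed tokens, then yield the complete text exactly
    once, only after it parses as valid JSON."""
    buffer = "".join(token_stream)
    # Raises if the stream produced incomplete or invalid JSON, so
    # callers never receive a partial object.
    json.loads(buffer)
    yield buffer
```

This is why the test above asserts a single iteration: the generator produces one chunk no matter how many tokens were streamed.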
Many of the extra code lines are in tests, since many of the synthesizers previously had few or no tests.
(force-pushed from 7cb0ba3 to 9b48dd9)
```python
def get_response(  # type: ignore[override]
    self,
    query_str: str,
    message_chunks: Sequence[ChatMessage],
```
Out of curiosity, why messages and not nodes? Technically, the synthesizers are meant to be used with nodes from retrievers.
Ah, I see, the base class expected `str` here.
Maybe we can update the base class to accept a union of types here?
Alternatively, we could just add `get_response_from_messages()` etc. to the base class. There could be some interesting routing with this approach that would help avoid duplicate code?
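One way that routing could look is sketched below. Everything here except `get_response` is hypothetical: `get_response_from_messages` comes from this thread's suggestion, and the dispatch logic, `synthesize` entry point, and `ChatMessage` stand-in are illustrative only.

```python
from collections import namedtuple

ChatMessage = namedtuple("ChatMessage", ["role", "content"])


class BaseSynthesizer:
    def get_response(self, query_str, text_chunks):
        raise NotImplementedError

    def get_response_from_messages(self, query_str, messages):
        # Default fallback: flatten messages to text so subclasses that
        # only implement the text path keep working unchanged.
        text_chunks = [f"{m.role}: {m.content}" for m in messages]
        return self.get_response(query_str, text_chunks)

    def synthesize(self, query_str, chunks):
        # Route on the element type instead of widening every
        # subclass signature to a union.
        if chunks and isinstance(chunks[0], ChatMessage):
            return self.get_response_from_messages(query_str, chunks)
        return self.get_response(query_str, chunks)


class JoinSynthesizer(BaseSynthesizer):
    """Trivial subclass implementing only the text path."""

    def get_response(self, query_str, text_chunks):
        return " | ".join(text_chunks)
```

The appeal of this shape is that multimodal-aware subclasses override only `get_response_from_messages`, while existing text-only subclasses need no changes.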
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
First of two or three PRs to broadly support multimodal synthesis. This PR:
- Adds a `BaseMultimodalSynthesizer` class to address existing semantic issues with variable naming and logical issues with conversion of nodes to multimodal content.

Fixes #21373
Although the line count in this PR is large, the total logical change is modest and fairly repetitive across the basic multimodal synthesizers. Since the Refine synthesizer contained more complicated updates, I will follow up with a second PR for the remaining synthesizers so focus can be given to the updates there. Many lines were also added because there was little to no testing of the synthesizer classes; some suggestions are made in PR comments on how to reduce total bloat. Unfortunately, because of some logical and semantic issues with the `BaseSynthesizer` class, it seemed better to make a new multimodal synthesizer class so as not to introduce breaking changes or overly complicated logic and function signatures in `BaseSynthesizer`.
New Package?
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?

Version Bump?
Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull-request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
Ran `uv run make format; uv run make lint` to appease the lint gods