pangukitsappdev.api.llms package

Submodules

pangukitsappdev.api.llms.base module

class pangukitsappdev.api.llms.base.AbstractLLMApi(llm_config: LLMConfig, chat_llm: Optional[BaseChatModel] = None, cache: Optional[CacheApi] = None)

Bases: LLMApi

ask(prompt: Union[str, List[ConversationMessage]], param_config: Optional[LLMParamConfig] = None) Union[LLMResp, Iterator]

Ask the LLM a question.

:param prompt: a single-turn prompt string or a multi-turn message list
:param param_config: (Optional) overrides the LLM's own parameter configuration to control the LLM's response
:return: LLMResp, or an Iterator when streaming
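
For illustration only, a minimal sketch of calling ask; it assumes the backend is configured via the default configuration file or environment variables, and that "pangu" is one of the names registered in the factory's llms_map:

    from pangukitsappdev.api.llms.factory import LLMs
    from pangukitsappdev.api.llms.llm_config import LLMParamConfig

    llm = LLMs.of("pangu")  # any name registered in LLMs.llms_map

    # Single-turn ask with the LLM's default parameters
    resp = llm.ask("Briefly introduce yourself.")

    # Per-call parameter override; stream=True makes ask return an Iterator
    for chunk in llm.ask("Briefly introduce yourself.",
                         LLMParamConfig(temperature=0.2, stream=True)):
        print(chunk)  # the exact chunk type depends on the SDK's streaming implementation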

ask_for_object(prompt: str, class_type: Type[BaseModel], param_config: Optional[LLMParamConfig] = None)

Ask the LLM a question and convert the answer into an object.

:param prompt: the prompt
:param class_type: the type the LLM answer should be converted to
:param param_config: (Optional) overrides the LLM's own parameter configuration to control the LLM's response
:return: LLM answer
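
A sketch of ask_for_object, assuming it returns an instance of the given class_type; the CityInfo model and the prompt are hypothetical:

    from pydantic import BaseModel

    class CityInfo(BaseModel):  # hypothetical target type
        name: str
        population: int

    # Reusing the llm instance from the previous sketch
    city = llm.ask_for_object(
        "Return the name and population of Beijing as JSON.", CityInfo)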

create_chat_llm_with(param_config: Optional[LLMParamConfig] = None) BaseChatModel
default_create_chat_llm_func(param_config: LLMParamConfig) BaseChatModel

Default method for creating a chat_llm. It constructs the new chat_llm directly from the class of chat_llm, so self.chat_llm must already exist when this default implementation is called; the new chat_llm is built with param_config.

:param param_config: chat_llm parameter configuration
:return: a chat_llm constructed with the new parameters

do_create_chat_llm(llm_config: LLMConfig)
get_llm_config() LLMConfig

Gets the configuration of the current LLM.

:return: LLMConfig

need_add_new_system_message() bool
parse_llm_response(llm_result: LLMResult) LLMResp
set_cache(cache: CacheApi)

Sets the cache.

:param cache: the cache implementation object
:return: void

set_callback(callback: BaseCallbackHandler)

Sets the callback handler.

:param callback: the callback object

class pangukitsappdev.api.llms.base.ConversationMessage(content: Union[str, List[Union[str, Dict]]], *, additional_kwargs: dict = None, response_metadata: dict = None, type: Literal['chat'] = 'chat', name: Optional[str] = None, id: Optional[str] = None, role: Role, tools: Optional[Any] = None, actions: List[AgentAction] = [], **kwargs: Any)

Bases: BaseMessage

Multi-turn conversation message. It extends BaseMessage with tools and actions so that it can be used by an Agent. A usage sketch follows the attribute list below.

Attributes:

    role: the conversation role
    content: the message content
    tools: the tool set
    actions: the actions taken when role is assistant; an empty list if no action was taken

actions: List[AgentAction]
role: Role

The speaker / role of the Message.

tools: Optional[Any]
type: Literal['chat']
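
A hedged sketch of a multi-turn conversation built from ConversationMessage and Role and passed to LLMApi.ask (the message contents are illustrative):

    from pangukitsappdev.api.llms.base import ConversationMessage, Role
    from pangukitsappdev.api.llms.factory import LLMs

    llm = LLMs.of("pangu")
    messages = [
        ConversationMessage(role=Role.SYSTEM, content="You are a helpful assistant."),
        ConversationMessage(role=Role.USER, content="What is the capital of France?"),
        ConversationMessage(role=Role.ASSISTANT, content="Paris."),
        ConversationMessage(role=Role.USER, content="And roughly how many people live there?"),
    ]
    resp = llm.ask(messages)
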
class pangukitsappdev.api.llms.base.LLMApi

Bases: ABC

abstract ask(prompt: Union[str, List[ConversationMessage]], param_config: Optional[LLMParamConfig] = None) Union[LLMResp, Iterator]

Ask the LLM a question.

:param prompt: a single-turn prompt string or a multi-turn message list
:param param_config: (Optional) overrides the LLM's own parameter configuration to control the LLM's response
:return: LLMResp, or an Iterator when streaming

abstract ask_for_object(prompt: str, class_type: Type[BaseModel], param_config: Optional[LLMParamConfig] = None)

Ask the LLM a question and convert the answer into an object.

:param prompt: the prompt
:param class_type: the type the LLM answer should be converted to
:param param_config: (Optional) overrides the LLM's own parameter configuration to control the LLM's response
:return: LLM answer

abstract get_llm_config() LLMConfig

Gets the configuration of the current LLM.

:return: LLMConfig

abstract set_cache(cache: CacheApi)

Sets the cache.

:param cache: the cache implementation object
:return: void

abstract set_callback(callback: BaseCallbackHandler)

Sets the callback handler.

:param callback: the callback object

class pangukitsappdev.api.llms.base.LLMApiAdapter(chat_llm: BaseChatModel)

Bases: AbstractLLMApi

Adapter for LLMApi. It adapts a LangChain LLM implementation class (BaseChatModel) to the LLMApi interface.

Attributes:

    chat_llm: the internally wrapped LangChain BaseChatModel implementation

class pangukitsappdev.api.llms.base.Role(value)

Bases: Enum

An enumeration.

ASSISTANT = {'desc': '助手', 'text': 'assistant'}
OBSERVATION = {'desc': '观察', 'text': 'observation'}
SYSTEM = {'desc': '系统', 'text': 'system'}
USER = {'desc': '用户', 'text': 'user'}
property desc
property text
pangukitsappdev.api.llms.base.convert_message_to_req(messages: List[BaseMessage]) List[Dict]
pangukitsappdev.api.llms.base.get_llm_params(params: dict) dict

Filters LLM API parameters: if a parameter is configured, it is returned in the resulting dict; otherwise it is omitted so that the default value is used.

:param params: the parameters supported by the LLM API
:return: the dict data for the request body
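
A hedged illustration of the intended filtering, assuming parameters left unset (None) are dropped so that backend defaults apply:

    from pangukitsappdev.api.llms.base import get_llm_params

    body = get_llm_params({"temperature": 0.3, "top_p": None, "max_tokens": 200})
    # Expected (under the assumption above): {"temperature": 0.3, "max_tokens": 200}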

pangukitsappdev.api.llms.factory module

class pangukitsappdev.api.llms.factory.LLMs

Bases: object

llms_map: Dict[str, Type] = {'gallery': <class 'pangukitsappdev.llms.gallery.GalleryLLMApi'>, 'openAI': <class 'pangukitsappdev.llms.openai.OpenAILLMApi'>, 'pangu': <class 'pangukitsappdev.llms.pangu.PanguLLMApi'>}
classmethod of(llm_name: str, llm_config: Optional[LLMConfig] = None) LLMApi

Creates an LLMApi implementation by name.

:param llm_name: the LLM name, uniquely identifying one kind of LLM
:param llm_config: (Optional) the LLM configuration; if not provided, it is read from the default configuration file or from environment variables
:return: LLMApi
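
For example, a minimal sketch assuming credentials and endpoints are provided via the default configuration file or environment variables:

    from pangukitsappdev.api.llms.factory import LLMs

    # Resolve configuration from the defaults
    pangu_llm = LLMs.of("pangu")

    # Or pass an explicit LLMConfig instance
    # pangu_llm = LLMs.of("pangu", llm_config=my_llm_config)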

classmethod of_env_prefix(llm_name: str, env_prefix) LLMApi
Args:

    llm_name: the LLM name, uniquely identifying one kind of LLM
    env_prefix: the prefix of the environment variables or configuration keys

Returns: LLMApi
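
A short sketch; the "sdk.llm.custom" prefix is hypothetical and stands for whatever prefix your configuration actually uses:

    from pangukitsappdev.api.llms.factory import LLMs

    llm = LLMs.of_env_prefix("pangu", "sdk.llm.custom")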

classmethod of_module(llm_name: str, llm_module_config: LLMModuleConfig) LLMApi

Constructs an LLMApi from the given LLMModuleConfig.

Args:

    llm_name: the LLM name, uniquely identifying one kind of LLM
    llm_module_config: the LLMModuleConfig passed in as an external parameter

Returns: LLMApi

classmethod register(llm_type: Type[LLMApi], llm_name: str)

Registers an LLM type.

:param llm_type: the LLM implementation class; must be a subclass of LLMApi
:param llm_name: the LLM name, uniquely identifying this LLM
:return: none
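
A sketch of registering a custom implementation; MyLLMApi is hypothetical and would typically extend AbstractLLMApi and override hooks such as do_create_chat_llm or parse_llm_response:

    from pangukitsappdev.api.llms.base import AbstractLLMApi
    from pangukitsappdev.api.llms.factory import LLMs

    class MyLLMApi(AbstractLLMApi):  # hypothetical custom implementation
        """Override do_create_chat_llm / parse_llm_response as needed."""

    LLMs.register(MyLLMApi, "my_llm")
    # LLMs.of("my_llm") can now resolve this implementation, given matching configuration.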

pangukitsappdev.api.llms.llm_config module

class pangukitsappdev.api.llms.llm_config.GalleryConfig(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, gallery_url: Optional[str] = None, iam_config: IAMConfig = None, http_config: HttpConfig = None)

Bases: SdkBaseSettings

gallery_url: Optional[str]
http_config: HttpConfig
iam_config: IAMConfig
class pangukitsappdev.api.llms.llm_config.LLMConfig(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, llm_module_config: LLMModuleConfig = None, iam_config: IAMConfig = None, llm_param_config: LLMParamConfig = None, openai_config: OpenAIConfig = None, gallery_config: GalleryConfig = None, http_config: HttpConfig = None)

Bases: SdkBaseSettings

LLM parameters.

Tips: nested objects here must use default_factory rather than default.

Attributes:

    llm_module_config: the LLM endpoint configuration, LLMModuleConfig
    iam_config: the IAM authentication configuration, IAMConfig; by default read from configuration keys starting with sdk.llm.iam
    llm_param_config: the LLM parameter configuration, LLMParamConfig
    openai_config: the OpenAI authentication configuration, OpenAIConfig
    gallery_config: the third-party LLM endpoint configuration, GalleryConfig
    http_config: the HTTP-related configuration, HttpConfig

A construction sketch follows the field list below.

gallery_config: GalleryConfig
http_config: HttpConfig
iam_config: IAMConfig
llm_module_config: LLMModuleConfig
llm_param_config: LLMParamConfig
openai_config: OpenAIConfig
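
A construction sketch; the URL is hypothetical, and nested fields that are not set fall back to their default_factory values:

    from pangukitsappdev.api.llms.factory import LLMs
    from pangukitsappdev.api.llms.llm_config import (LLMConfig, LLMModuleConfig,
                                                     LLMParamConfig)

    config = LLMConfig(
        llm_module_config=LLMModuleConfig(url="https://example.com/pangu/v1"),  # hypothetical endpoint
        llm_param_config=LLMParamConfig(temperature=0.7),
    )
    llm = LLMs.of("pangu", config)
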
class pangukitsappdev.api.llms.llm_config.LLMModuleConfig(env_prefix='sdk.llm.pangu', *, llm_name: str = 'pangu_llm', url: Optional[str] = None, system_prompt: Optional[str] = None, enable_append_system_message: bool = True, module_version: Optional[str] = None, llm_module_property: LLMModuleProperty = None, cot_desc: Optional[str] = None)

Bases: SdkBaseSettings

Basic configuration parameters of the Pangu LLM.

Attributes:

    llm_name: the model name
    url: the model URL
    system_prompt: the system persona
    enable_append_system_message: whether to try to automatically add a SystemMessage once system_prompt is set; in Agent scenarios, if the system prompt has already been concatenated into the UserMessage, this is set to false and no new SystemMessage is added
    module_version: the Pangu model version
    cot_desc: the CoT (chain-of-thought) description

cot_desc: Optional[str]
enable_append_system_message: bool
llm_module_property: LLMModuleProperty
llm_name: str
module_version: Optional[str]
system_prompt: Optional[str]
url: Optional[str]
class pangukitsappdev.api.llms.llm_config.LLMModuleProperty(*, unify_tag_prefix: Optional[str] = None, unify_tag_suffix: Optional[str] = None, unify_tool_tag_prefix: Optional[str] = None, unify_tool_tag_suffix: Optional[str] = None)

Bases: BaseModel

Pangu Agent prompt markers.

Attributes:

    unify_tag_prefix: start placeholder of the input prompt
    unify_tag_suffix: end placeholder of the input prompt
    unify_tool_tag_prefix: start placeholder of a tool call
    unify_tool_tag_suffix: end placeholder of a tool call

unify_tag_prefix: Optional[str]
unify_tag_suffix: Optional[str]
unify_tool_tag_prefix: Optional[str]
unify_tool_tag_suffix: Optional[str]
class pangukitsappdev.api.llms.llm_config.LLMParamConfig(*, max_tokens: Optional[int] = None, temperature: Optional[float] = None, top_p: Optional[float] = None, n: Optional[int] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, best_of: Optional[int] = None, stream: Optional[bool] = None, with_prompt: Optional[bool] = None)

Bases: BaseModel

class Config

Bases: object

extra = 'forbid'
best_of: Optional[int]

Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

frequency_penalty: Optional[float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

max_tokens: Optional[int]

The maximum number of tokens to generate for the completion.

n: Optional[int]

How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

presence_penalty: Optional[float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

stream: Optional[bool]

If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available.

temperature: Optional[float]

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

top_p: Optional[float]

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

with_prompt: Optional[bool]

Whether the caller supplies the complete prompt. Optional; not set by default.
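
A per-call override sketch, assuming llm is an LLMApi instance from the LLMs factory; because Config.extra is 'forbid', an unknown field name raises a validation error:

    from pangukitsappdev.api.llms.llm_config import LLMParamConfig

    params = LLMParamConfig(temperature=0.2, max_tokens=512, stream=False)
    resp = llm.ask("Summarize the document in one sentence.", params)

    # LLMParamConfig(foo=1)  # would fail: extra fields are forbidden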

Module contents