• When the automated compile/run/analyze workflow hits an error, the agent sometimes stops consulting the project's guidance files, starts building, compiling, and running on its own, and hangs
    From my observation, the likely cause is that after an error the working directory is no longer the one the workflow assumes when everything goes smoothly (the scripts use relative paths throughout). Unable to find the preset build, compile, and run scripts, the agent starts searching the machine for MSBuild, or creates a new folder to compile the C++ program, or cannot reach the files it needs to compare against (once the working directory is wrong, those relative-path files simply do not exist). None of this succeeds, and it keeps spinning inefficiently through the same steps. I do not know the best fix, but I suggest that after an error the agent should return to the project root and re-read the project guidance files that are already in place.
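    The root cause described above is scripts resolving paths against the current working directory. A minimal sketch of the suggested workaround, anchoring every path to a fixed project root so a changed CWD cannot break it (the root path and script name are hypothetical):

```python
import os

def resolve(project_root: str, relative: str) -> str:
    """Resolve a path against a fixed project root instead of the
    current working directory, so a changed CWD cannot break it."""
    return os.path.normpath(os.path.join(project_root, relative))

# Hypothetical example: record the project root once up front; later
# steps then survive any directory changes made by failed commands.
root = "/work/myproject"                      # assumed project root
build_script = resolve(root, "scripts/build.cmd")
print(build_script)
```

    The same idea applies to recovery: on error, `os.chdir(root)` first, then re-read the guidance files by absolute path.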
  • Could some commonly used skills be enabled by default?
    For example, skills for handling common document formats (docx, pdf, xlsx). These are used all the time, and many new users (who may not know how to enable a skill, or even what a skill is) will want to summarize design documents or build features from them; if those documents cannot be read, the experience suffers. Please consider this.
  • Reading a CSV file hangs
    It stayed stuck in that state and I had to abort manually. The CSV file is only 12 KB; it is CPU function-sampling data exported from the Visual Studio Performance Profiler.
  • Please add Go to Definition, Find All References, and other VS Code-style actions to the IDE's context menu to speed up code reading
    Without these quick-navigation features (especially Go to Definition), reading code is far too inefficient.
  • [Question] When will the Linux version of the CodeArts coding agent be released?
  • [Discussion] CodeArts IDE compatibility with MSVC + CUDA
    Is there any plan for the Windows CodeArts C++ IDE to support the MSVC compiler toolchain? The current CMake configuration seems to offer only a MinGW toolchain, which does not work well with NVIDIA CUDA.
  • [Question] When committing with Git, the "Generate commit message" button does not work and reports "no changes to commit"
  • [Discussion] CodeArts Doer for Coding zip: the offline package cannot be downloaded
    Both download links for the CodeArts Doer for Coding zip plugin package on that page fail with errors.
  • [Discussion] The IDE reports an error when debugging a Cangjie program; also, the IDE's built-in ticket form has no selectable problem categories
    The IDE reports an error when debugging a Cangjie program. The console shows that the build and run succeeded, but then the error below pops up and the IDE freezes.
  • [Discussion] "Operation unavailable: the project is being indexed, please try again later"
    I have a project that has already finished building when I run it, yet I still get the message "Operation unavailable: the project is being indexed, please try again later", even though the indexing shown on the page has already completed.
  • [Discussion] npm packages cannot be installed
    npm warn deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
    npm error code EPERM
    npm error syscall symlink
    npm error path ../@babel/parser/bin/babel-parser.js
    npm error dest /storage/Users/currentUser/Documents/Workspace/my-tauri-app/node_modules/.bin/parser
    npm error errno -1
    npm error Error: EPERM: operation not permitted, symlink '../@babel/parser/bin/babel-parser.js' -> '/storage/Users/currentUser/Documents/Workspace/my-tauri-app/node_modules/.bin/parser'
    npm error     at async symlink (node:internal/fs/promises:1001:10)
    npm error     at async Promise.all (index 0)
    npm error     at async Promise.all (index 0)
    npm error     at async #createBinLinks (/data/app/node.org/node_22.7.0/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/rebuild.js:394:5)
    npm error     at async Promise.allSettled (index 0)
    npm error     at async #linkAllBins (/data/app/node.org/node_22.7.0/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/rebuild.js:375:5)
    npm error     at async #build (/data/app/node.org/node_22.7.0/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/rebuild.js:160:7)
    npm error     at async Arborist.rebuild (/data/app/node.org/node_22.7.0/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/rebuild.js:62:5)
    npm error     at async [reifyPackages] (/data/app/node.org/node_22.7.0/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:325:11)
    npm error     at async Arborist.reify (/data/app/node.org/node_22.7.0/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:142:5) {
    npm error   errno: -1,
    npm error   code: 'EPERM',
    npm error   syscall: 'symlink',
    npm error   path: '../@babel/parser/bin/babel-parser.js',
    npm error   dest: '/storage/Users/currentUser/Documents/Workspace/my-tauri-app/node_modules/.bin/parser'
    npm error }
    npm error
    npm error The operation was rejected by your operating system.
    npm error It is likely you do not have the permissions to access this file as the current user
    npm error
    npm error If you believe this might be a permissions issue, please double-check the
    npm error permissions of the file and its containing directories, or try running
    npm error the command again as root/Administrator.
    npm error A complete log of this run can be found in: /storage/Users/currentUser/.npm/_logs/2025-07-17T23_42_11_151Z-debug-0.log
  • [Case Co-creation] AI Agent application development: using AI to build a "Slogan Master" assistant
    Case introduction: Using the free DeepSeek-V3-32k model provided by Huawei Cloud MaaS, build a Slogan generation tool that quickly produces attention-grabbing promotional slogans. It draws on advertising and marketing theory and rich practical experience, and is good at understanding product characteristics, profiling target user groups, and capturing user attention.
    1 Overview
    1.1 Case introduction: Slogan Master is a marketing-oriented AI agent built on Huawei's DeepSeek-V3-32k model. It is designed for business owners, marketers, and creative professionals, and can quickly generate high-quality, personalized slogans and taglines that match a brand's positioning. The agent has strong natural-language understanding and generation capabilities, adapts precisely to different industries, scenarios, and target audiences, and offers creative inspiration and optimization suggestions, effectively improving brand communication efficiency and marketing results.
    1.2 Intended audience: business owners, marketers, creative professionals.
    1.3 Estimated time: about 60 minutes in total.
    1.4 Case flow:
    1. Enter the AI Agent integration space (Versatile) in the Huawei Developer Space.
    2. Create the agent.
    3. Create sample questions.
    4. Publish the agent; you will be asked for an API key, which you add under My Credentials -> Platform API Key.
    5. Test it. Sample user input: "Generate a set of autumn limited-edition slogans for a new bubble-tea brand."
    1.5 Resources: estimated cost is 0 CNY. Resource: Developer Space cloud host, 4 vCPUs / 8 GB, Arm, Ubuntu 22.04 Server custom edition.
    2 Environment setup
    2.1 Developer Space: Huawei Developer Space gives developers an always-available cloud desktop host, a rich set of preconfigured tools, and flexible scenario-based resource pools, ready to use out of the box for trying Huawei's core technologies and resources. If you have not claimed a cloud host yet, follow the "claim a free cloud host" documentation. After claiming it, open the Developer Space workbench and connect to the host desktop.
    2.2 Log in to Huawei Cloud in the browser.
    2.3 Open the Versatile page.
    3 Building the agent
    On the agent creation page, fill in the basic information.
    Name / introduction: Slogan Master (Slogan生成大师) / a Slogan Master that helps brands convey their core value more precisely.
    Opening line: "Hi, I am a professional Slogan Master; don't hesitate to ask me to generate a slogan!"
    Suggested questions: a tagline for a new consumer brand / a transformation slogan for a traditional company / a set of taglines for a tech product / a set of autumn limited-edition slogans for a new bubble-tea brand.
    Role prompt: You are a Slogan Master who quickly produces attention-grabbing slogans. You have advertising and marketing theory plus rich practical experience; you understand product characteristics, profile target users, capture attention, and use concise, powerful wording. A slogan is a short, punchy tagline that must fit the product and its target audience while being attractive and infectious.
    Goals: understand the product's characteristics; analyze and profile the target users; quickly generate slogans.
    Constraints: the slogan must be relevant to the product; it must be concise, well-worded, and forceful; do not ask the user follow-up questions, but reason and answer from the information given.
    Skills: advertising and marketing knowledge; user psychology analysis; copywriting.
    Examples: product: a fitness app, slogan: "自律,才能自由" (Self-discipline is freedom); product: a privacy-focused messaging app, slogan: "你的私密,我们守护!" (Your privacy, our guard!).
    Workflow: input: the user provides basic product information; think: analyze the product characteristics step by step, and consider the traits and psychology of its audience; answer: based on the product and audience, combined with your industry knowledge and experience, output five slogans for the user to choose from.
    Notes: answer only when the user asks; stay silent otherwise. Initial line: "I am a Slogan Master; coining slogans that move people is my specialty. Tell me which product you want a slogan for!"
    Publishing: once publishing completes, the Slogan Master agent built on the Developer Space cloud host is done.
    I am participating in Case Co-creation round 5, Developer Space AI Agent development: https://developer.huaweicloud.com/signup/0e966198e87a4210b5f88c0d759d4f3b
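    The agent configuration above boils down to a system prompt plus the user's turn sent to a chat-completions API. A minimal sketch of how such a request body could be assembled (this is not the Versatile platform itself; the model identifier and the shortened English role prompt are illustrative assumptions, and no network call is made):

```python
import json

# Abbreviated stand-in for the full role prompt described in the case.
ROLE_PROMPT = ("You are a Slogan Master who quickly produces attention-grabbing "
               "slogans. Output five slogans for the user to choose from.")

def build_request(user_input: str) -> dict:
    """Assemble an OpenAI-style chat request: system prompt first, then the user turn."""
    return {
        "model": "DeepSeek-V3-32k",   # assumed model identifier
        "messages": [
            {"role": "system", "content": ROLE_PROMPT},
            {"role": "user", "content": user_input},
        ],
    }

body = build_request("Generate a set of autumn limited-edition slogans for a new bubble-tea brand")
print(json.dumps(body, ensure_ascii=False)[:60])
```

    The platform handles this wiring for you; the sketch only shows what the name, role prompt, and sample question map onto underneath.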
  • How do I configure smartassist in settings.json?
    What properties of smartassist.additionalSettings can be set by the user?
  • [Case Co-creation] A simulated classroom discussion assistant built with the Cangjie programming language + DeepSeek
    Project overview: This project uses a Huawei Developer Space cloud host, the Cangjie programming language, and the DeepSeek API to build a simple simulated classroom discussion assistant. The user enters a term or question on the command line, and the application returns a detailed explanation of it plus a way to memorize it. The assistant teaches all kinds of knowledge in a light-hearted, humorous style.
    Intended audience: learners who enjoy studying in a humorous way.
    Case flow: (1) develop the Cangjie program with the CodeArts IDE for Cangjie; (2) claim the free DeepSeek tokens; (3) connect Cangjie to the DeepSeek API.
    Development steps
    1. Environment preparation
    Register for and log in to Huawei Developer Space on its official site. Create a cloud host instance with the default configuration. Click "Open cloud host" and then "Open remote desktop"; the CodeArts IDE for Cangjie is preinstalled and ready to use out of the box.
    2. Project initialization
    Create a new project: in the Developer Space, open CodeArts IDE for Cangjie and create a project named "demo" (any name works). The name and location are up to you; choose "executable" as the artifact type.
    Artifact types: executable, an executable binary; static, a static library, i.e. a collection of precompiled object files; dynamic, a dynamic library loaded into memory at run time, with one shared copy used by multiple programs instead of each program embedding a complete copy as with a static library.
    Configure the project structure (source files, configuration files, and so on). After creating the project, open src/main.cj and add a function and a call to it to the preset code (create the file if it does not exist; the code is below). Then click the Run button at the top right and check the output in the terminal. If a run/debug configuration dialog pops up, click OK to continue.
    The modified main.cj:

    package demo

    func println_add(a: Int64, b: Int64): Int64 {
        let number: Int64 = a + b
        println(number)
        return number
    }

    main(): Int64 {
        println("hello world")
        println_add(3, 4)
        println("hello Cangjie")
        return 0
    }

    Debugging: click the Debug button at the top right. The first time, you will be asked to add a debug configuration; add one and choose Cangjie (cjdb) Debug - launch.
    Debug modes: Launch starts the debugger and loads the program at the same time; Attach attaches the debugger to an already running program.
    You can then set breakpoints in main.cj and click Debug again. The debug controls appear at the top, and variables, watches, the call stack, and breakpoints appear on the left.
    2.1 Version control, optional (linking to GitCode)
    When you connect, you will be asked for an access credential. Log in to GitCode, open "Access tokens", create a new token (just keep clicking Next), and paste the resulting token into the IDE. Then create a new repository (name it whatever you like). The new repository is empty; you can start by cloning it, add files such as README.md, LICENSE, and .gitignore, or follow the command-line instructions to upload an existing project from your machine.
    Git global setup:

    git config --global user.name "weixin_41024010"
    git config --global user.email "weixin_41024010@noreply.gitcode.com"

    Create a new repository:

    git clone https://gitcode.com/weixin_41024010/test.git
    cd test
    echo "# test" >> README.md
    git add README.md
    git commit -m "add README"
    git branch -m main
    git push -u origin main

    Push an existing folder:

    cd existing_folder
    git init
    git remote add origin https://gitcode.com/weixin_41024010/test.git
    git add .
    git commit -m "Initial commit"
    git branch -m main
    git push -u origin main

    Push an existing Git repository:

    cd existing_repo
    git remote rename origin old-origin
    git remote add origin https://gitcode.com/weixin_41024010/test.git
    git push -u origin --all
    git push -u origin --tags

    Once Git is configured you can use it, but first raise the commit file-size limit; otherwise, if the Git ignore file is not set up properly, the push will fail. Commits can then be made from the Source Control view, and version control for the code is in place.
    3. Connecting Cangjie to the DeepSeek API
    Huawei Cloud offers 2,000,000 free tokens per model, including the full-scale DeepSeek-R1 and V3. Log in to the Huawei Cloud ModelArts Studio (MaaS) console to claim the free quota; this case uses the full-scale DeepSeek-R1.
    In the cloud host desktop, open Firefox from the bottom menu bar and visit the ModelArts Studio home page at https://www.huaweicloud.com/product/modelarts/studio.html, click "ModelArts Studio console", and log in as prompted to enter the console. You can also do this in a local browser. After claiming the quota, click "Invocation instructions" to get the API address and model name. Create an API key as well; this is straightforward and the console walks you through it. Record the API address, model name, and API key for the steps below.
    4. Writing the Cangjie code for DeepSeek
    On the cloud host desktop, right-click and choose Open Terminal Here, then clone the Cangjie example repository:

    git clone https://gitcode.com/CaseDeveloper/Cangjie-Examples.git

    Open the AIChat example project in CodeArts IDE for Cangjie: File -> Open Project, then select AIChat inside the cloned Cangjie-Examples directory. Edit config.json and fill in the DeepSeek API information (the model, api_key, and base_url entries):

    {
        "model": "DeepSeek-V3",
        "api_key": "Your KEY",
        "base_url": "https://maas-cn-southwest-2.modelarts-maas.com/v1/infers/8a062fd4-7367-4ab4-a936-5eeb8fb821c4/v1/chat/completions",
        "system_prompt": "模拟课堂讨论 - Simulated Classroom Discussion, I need you to help me memorize the noun explanation, after I type a noun, you will simulate 5 students in the class to generate their speeches about the noun. The discussion must be humorous, and easy to understand. The entire conversation and instructions should be provided in Chinese. The first term is: 主题是"
    }

    Open main.cj and run the project, and you can chat with the AI assistant. (Note: the IDE may underline Console in red when you open main.cj; this does not affect execution and can be ignored, or you can delete and rewrite the import std.console.Console line.)
    That completes the basic Cangjie + DeepSeek chat. To turn it into the "simulated classroom discussion assistant", all you need to change is the "system_prompt" value, setting it to the prompt shown in the config.json above (an English prompt is sometimes more effective).
    Run the program. Finally, here is the complete code (chat.cj). Everything is open-sourced; you can clone it with:

    git clone https://gitcode.com/weixin_41024010/HUAWEIDEMO.git

    When using it, the project root directory must be CHATDEEPSEEK, otherwise compilation will fail. The full code:

    package openai_chat

    import encoding.json.stream.*
    import net.http.ClientBuilder
    import net.http.HttpHeaders
    import net.http.HttpRequestBuilder
    import net.tls.TlsClientConfig
    import net.tls.CertificateVerifyMode
    import std.collection.ArrayList
    import std.io.ByteArrayStream
    import std.time.Duration

    public struct FunctionCall <: JsonDeserializable<FunctionCall> & JsonSerializable {
        public let name: String
        public let arguments: String

        public init(name: String, arguments: String) {
            this.name = name
            this.arguments = arguments
        }

        public static func fromJson(r: JsonReader): FunctionCall {
            var temp_name: String = ""
            var temp_arguments: String = ""
            while (let Some(v) <- r.peek()) {
                match (v) {
                    case BeginObject =>
                        r.startObject()
                        while (r.peek() != EndObject) {
                            let n = r.readName()
                            match (n) {
                                case "name" => temp_name = r.readValue<String>()
                                case "arguments" => temp_arguments = r.readValue<String>()
                                case _ => r.skip()
                            }
                        }
                        r.endObject()
                        break
                    case _ => throw Exception("can't deserialize for FunctionCall")
                }
            }
            return FunctionCall(temp_name, temp_arguments)
        }

        public func toJson(w: JsonWriter) {
            w.startObject()
            w.writeName("name").writeValue<String>(this.name)
            w.writeName("arguments").writeValue<String>(this.arguments)
            w.endObject()
            w.flush()
        }
    }

    public enum RoleType {
        User | Assistant | System | Function | NULL
    }

    public func role_type_to_str(role: RoleType): Option<String> {
        return match (role) {
            case RoleType.User => Some("user")
            case RoleType.Assistant => Some("assistant")
            case RoleType.System => Some("system")
            case RoleType.Function => Some("function")
            case RoleType.NULL => None
        }
    }

    public func str_to_role_type(role_option_str: Option<String>): RoleType {
        return match (role_option_str) {
            case Some(role_option) => match (role_option) {
                case "user" => RoleType.User
                case "assistant" => RoleType.Assistant
                case "system" => RoleType.System
                case "function" => RoleType.Function
                case x => throw Exception("unknow enum ${x} for RoleType")
            }
            case None => RoleType.NULL
        }
    }

    public struct Message <: JsonDeserializable<Message> & JsonSerializable {
        public let role: RoleType
        public var content: String
        public let function_call: Option<FunctionCall>

        public init(role: RoleType, content: String) {
            this.role = role
            this.content = content
            this.function_call = None
        }

        public init(role: RoleType, content: String, function_call: Option<FunctionCall>) {
            this.role = role
            this.content = content
            this.function_call = function_call
        }

        public static func fromJson(r: JsonReader): Message {
            var temp_role: Option<String> = None // role may null in stream chat
            var temp_content: String = ""
            var temp_functional_call: Option<FunctionCall> = None
            while (let Some(v) <- r.peek()) {
                match (v) {
                    case BeginObject =>
                        r.startObject()
                        while (r.peek() != EndObject) {
                            let n = r.readName()
                            match (n) {
                                case "role" => temp_role = r.readValue<Option<String>>()
                                case "content" => temp_content = r.readValue<String>()
                                case "function_call" => temp_functional_call = r.readValue<Option<FunctionCall>>()
                                case _ => r.skip()
                            }
                        }
                        r.endObject()
                        break
                    case _ => throw Exception("can't deserialize for Message")
                }
            }
            let role_type: RoleType = str_to_role_type(temp_role)
            return Message(role_type, temp_content)
        }

        public func toJson(w: JsonWriter) {
            w.startObject()
            w.writeName("role").writeValue<Option<String>>(role_type_to_str(this.role))
            w.writeName("content").writeValue<String>(this.content)
            w.endObject()
            w.flush()
        }
    }

    public struct ChatRequest <: JsonSerializable {
        private let model: String
        private let messages: ArrayList<Message>
        private let max_tokens: Int64
        private let temperature: Float64
        private let top_p: Float64
        private let n: Int32
        private let stream: Bool

        public init(model: String, messages: ArrayList<Message>, max_tokens: Int64,
                    temperature: Float64, top_p: Float64, n: Int32, stream: Bool) {
            // construction function with messages
            this.model = model
            this.messages = messages
            this.max_tokens = max_tokens
            this.temperature = temperature
            this.top_p = top_p
            this.n = n
            this.stream = stream
        }

        public init(model: String, prompt: String, history: ArrayList<(String, String)>,
                    system_prompt: String, max_tokens: Int64, temperature: Float64,
                    top_p: Float64, n: Int32, stream: Bool) {
            // construction function with prompt and system_prompt
            this.model = model
            this.messages = ArrayList<Message>([Message(RoleType.System, system_prompt)])
            for ((use_msg, bot_msg) in history) {
                this.messages.append(Message(RoleType.User, use_msg))
                this.messages.append(Message(RoleType.Assistant, bot_msg))
            }
            this.messages.append(Message(RoleType.User, prompt))
            this.max_tokens = max_tokens
            this.temperature = temperature
            this.top_p = top_p
            this.n = n
            this.stream = stream
        }

        public init(model: String, prompt: String, history: ArrayList<(String, String)>,
                    system_prompt: String, stream: Bool) {
            // construction function with prompt and default arguments
            this.model = model
            this.messages = ArrayList<Message>([Message(RoleType.System, system_prompt)])
            for ((use_msg, bot_msg) in history) {
                this.messages.append(Message(RoleType.User, use_msg))
                this.messages.append(Message(RoleType.Assistant, bot_msg))
            }
            this.messages.append(Message(RoleType.User, prompt))
            this.max_tokens = 2000
            this.temperature = 0.7
            this.top_p = 1.0
            this.n = 1
            this.stream = stream
        }

        public func toJson(w: JsonWriter) {
            w.startObject()
            w.writeName("model").writeValue<String>(this.model)
            w.writeName("messages").writeValue<ArrayList<Message>>(this.messages)
            w.writeName("max_tokens").writeValue<Int64>(this.max_tokens)
            w.writeName("temperature").writeValue<Float64>(this.temperature)
            w.writeName("top_p").writeValue<Float64>(this.top_p)
            w.writeName("n").writeValue<Int32>(this.n)
            w.writeName("stream").writeValue<Bool>(this.stream)
            w.endObject()
            w.flush()
        }
    }

    public struct Choice <: JsonDeserializable<Choice> & JsonSerializable {
        public let index: Int32
        public let message: Option<Message>
        public let delta: Option<Message>
        public let finish_reason: Option<String>
        public let logprobs: Option<Float64> // dashscope for qwen need

        public init(index: Int32, message: Option<Message>, delta: Option<Message>,
                    finish_reason: Option<String>, logprobs: Option<Float64>) {
            this.index = index
            this.message = message
            this.delta = delta
            this.finish_reason = finish_reason
            this.logprobs = logprobs
        }

        public static func fromJson(r: JsonReader): Choice {
            var temp_index: Int32 = -1
            var temp_message: Option<Message> = None
            var temp_delta: Option<Message> = None
            var temp_finish_reason: Option<String> = None
            var temp_logprobs: Option<Float64> = None
            while (let Some(v) <- r.peek()) {
                match (v) {
                    case BeginObject =>
                        r.startObject()
                        while (r.peek() != EndObject) {
                            let n = r.readName()
                            match (n) {
                                case "index" => temp_index = r.readValue<Int32>()
                                case "message" => temp_message = r.readValue<Option<Message>>()
                                case "delta" => temp_delta = r.readValue<Option<Message>>()
                                case "finish_reason" => temp_finish_reason = r.readValue<Option<String>>()
                                case "logprobs" => temp_logprobs = r.readValue<Option<Float64>>()
                                case _ => r.skip()
                            }
                        }
                        r.endObject()
                        break
                    case _ => throw Exception("can't deserialize for Choice")
                }
            }
            return Choice(temp_index, temp_message, temp_delta, temp_finish_reason, temp_logprobs)
        }

        public func toJson(w: JsonWriter) {
            w.startObject()
            w.writeName("index").writeValue<Int32>(this.index)
            w.writeName("message").writeValue<Option<Message>>(this.message)
            w.writeName("delta").writeValue<Option<Message>>(this.delta)
            w.writeName("finish_reason").writeValue<Option<String>>(this.finish_reason)
            w.writeName("logprobs").writeValue<Option<Float64>>(this.logprobs)
            w.endObject()
            w.flush()
        }
    }

    public struct Usage <: JsonDeserializable<Usage> & JsonSerializable {
        public let prompt_tokens: UInt64
        public let completion_tokens: UInt64
        public let total_tokens: UInt64

        public init(prompt_tokens: UInt64, completion_tokens: UInt64, total_tokens: UInt64) {
            this.prompt_tokens = prompt_tokens
            this.completion_tokens = completion_tokens
            this.total_tokens = total_tokens
        }

        public static func fromJson(r: JsonReader): Usage {
            var temp_prompt_tokens: UInt64 = 0
            var temp_completion_tokens: UInt64 = 0
            var temp_total_tokens: UInt64 = 0
            while (let Some(v) <- r.peek()) {
                match (v) {
                    case BeginObject =>
                        r.startObject()
                        while (r.peek() != EndObject) {
                            let n = r.readName()
                            match (n) {
                                case "prompt_tokens" => temp_prompt_tokens = r.readValue<UInt64>()
                                case "completion_tokens" => temp_completion_tokens = r.readValue<UInt64>()
                                case "total_tokens" => temp_total_tokens = r.readValue<UInt64>()
                                case _ => r.skip()
                            }
                        }
                        r.endObject()
                        break
                    case _ => throw Exception("can't deserialize for Usage")
                }
            }
            return Usage(temp_prompt_tokens, temp_completion_tokens, temp_total_tokens)
        }

        public func toJson(w: JsonWriter) {
            w.startObject()
            w.writeName("prompt_tokens").writeValue<UInt64>(this.prompt_tokens)
            w.writeName("completion_tokens").writeValue<UInt64>(this.completion_tokens)
            w.writeName("total_tokens").writeValue<UInt64>(this.total_tokens)
            w.endObject()
            w.flush()
        }
    }

    public struct ChatResponse <: JsonDeserializable<ChatResponse> {
        // some api names `id`, and some names `request_id`
        public let id: Option<String>
        public let request_id: Option<String>
        public let system_fingerprint: Option<String>
        public let model: String
        public let object: String
        public let created: UInt64
        public let choices: ArrayList<Choice>
        public let usage: Option<Usage>

        public init(id: Option<String>, request_id: Option<String>,
                    system_fingerprint: Option<String>, model: String, object: String,
                    created: UInt64, choices: ArrayList<Choice>, usage: Option<Usage>) {
            this.id = id
            this.request_id = request_id
            this.system_fingerprint = system_fingerprint
            this.model = model
            this.object = object
            this.created = created
            this.choices = choices
            this.usage = usage
        }

        public static func fromJson(r: JsonReader): ChatResponse {
            var temp_id: Option<String> = None
            var temp_request_id: Option<String> = None
            var temp_system_fingerprint: Option<String> = None
            var temp_model: String = ""
            var temp_object: String = ""
            var temp_created: UInt64 = 0
            var temp_choices: ArrayList<Choice> = ArrayList<Choice>([])
            var temp_usage: Option<Usage> = None
            while (let Some(v) <- r.peek()) {
                match (v) {
                    case BeginObject =>
                        r.startObject()
                        while (r.peek() != EndObject) {
                            let n = r.readName()
                            match (n) {
                                case "id" => temp_id = r.readValue<Option<String>>()
                                case "request_id" => temp_request_id = r.readValue<Option<String>>()
                                case "system_fingerprint" => temp_system_fingerprint = r.readValue<Option<String>>()
                                case "model" => temp_model = r.readValue<String>()
                                case "object" => temp_object = r.readValue<String>()
                                case "created" => temp_created = r.readValue<UInt64>()
                                case "choices" => temp_choices = r.readValue<ArrayList<Choice>>()
                                case "usage" => temp_usage = r.readValue<Option<Usage>>()
                                case _ => r.skip()
                            }
                        }
                        r.endObject()
                        break
                    case _ => throw Exception("can't deserialize for ChatResponse")
                }
            }
            return ChatResponse(temp_id, temp_request_id, temp_system_fingerprint, temp_model,
                                temp_object, temp_created, temp_choices, temp_usage)
        }
    }

    public func get_domain(url: String): String {
        var temp_url = url
        if (temp_url.startsWith("https://")) {
            temp_url = temp_url["https://".size..]
        } else if (temp_url.startsWith("http://")) {
            temp_url = temp_url["http://".size..]
        }
        let domain: String = temp_url.split("?")[0].split("/")[0]
        return domain
    }

    public func build_http_client(prompt: String, env_info: EnvInfo,
                                  history: ArrayList<(String, String)>, stream!: Bool) {
        // prepare input data
        var array_stream = ByteArrayStream()
        let json_writer = JsonWriter(array_stream)
        let chat_res = ChatRequest(env_info.model, prompt, history, env_info.system_prompt, stream)
        chat_res.toJson(json_writer)
        let post_data: Array<UInt8> = array_stream.readToEnd()

        var headers: HttpHeaders = HttpHeaders()
        headers.add("Authorization", "Bearer ${env_info.api_key}")
        headers.add("Content-Type", "application/json")
        if (stream) {
            headers.add("Accept", "text/event-stream")
        }

        let request = HttpRequestBuilder()
            .url(env_info.base_url)
            .method("POST")
            .body(post_data)
            .readTimeout(Duration.second * 120)
            .addHeaders(headers)
            .build()

        let client = if (env_info.base_url.startsWith("https")) {
            var tls_client_config = TlsClientConfig()
            tls_client_config.verifyMode = CertificateVerifyMode.TrustAll
            tls_client_config.domain = get_domain(env_info.base_url)
            ClientBuilder().tlsConfig(tls_client_config).build()
        } else {
            ClientBuilder().build()
        }
        return (request, client)
    }

    public func chat(prompt: String, env_info: EnvInfo,
                     history: ArrayList<(String, String)>): Option<String> {
        let (request, client) = build_http_client(prompt, env_info, history, stream: false)
        var result_message: Option<String> = None
        var res_text = ""
        try {
            // call api
            let response = client.send(request)
            // read result (support max revice 100k data)
            let buffer = Array<Byte>(102400, item: 0)
            let length = response.body.read(buffer)
            res_text = String.fromUtf8(buffer[..length])
            var input_stream = ByteArrayStream()
            input_stream.write(res_text.toArray())
            // convert text to ChatResponse object
            let json_reader = JsonReader(input_stream)
            let res_object = ChatResponse.fromJson(json_reader)
            let choices: ArrayList<Choice> = res_object.choices
            if (choices.size > 0) {
                let message = choices[0].message.getOrThrow()
                result_message = Some(message.content)
            } else {
                println("can't found any response")
            }
        } catch (e: Exception) {
            println("ERROR: ${e.message}, reviced text is ${res_text}")
        }
        client.close()
        return result_message
    }

    public func stream_chat(prompt: String, env_info: EnvInfo,
                            history: ArrayList<(String, String)>): Option<String> {
        let (request, client) = build_http_client(prompt, env_info, history, stream: true)
        var result_response: String = ""
        var temp_text2 = ""
        try {
            // call api
            let response = client.send(request)
            // read result
            let buffer = Array<Byte>(10240, item: 0)
            var finish_reason: Option<String> = None
            while (finish_reason.isNone() && temp_text2 != "[DONE]") {
                let length = response.body.read(buffer)
                let res_text = String.fromUtf8(buffer[..length])
                for (temp_text in res_text.split("\n")) {
                    temp_text2 = if (temp_text.startsWith("data: ")) {
                        temp_text["data: ".size..]
                    } else {
                        temp_text
                    }
                    if (temp_text2.size == 0) {
                        continue
                    }
                    if (temp_text2 == "[DONE]") {
                        break
                    }
                    var input_stream = ByteArrayStream()
                    input_stream.write(temp_text2.toArray())
                    // convert text to ChatResponse object
                    let json_reader = JsonReader(input_stream)
                    let res_object = ChatResponse.fromJson(json_reader)
                    let choices: ArrayList<Choice> = res_object.choices
                    if (choices.size > 0) {
                        finish_reason = choices[0].finish_reason
                        if (finish_reason.isNone()) {
                            let delta = choices[0].delta.getOrThrow()
                            print("${delta.content}", flush: true)
                            result_response += delta.content
                        }
                    } else {
                        println("can't found any response")
                    }
                }
            }
        } catch (e: Exception) {
            println("ERROR: ${e.message}, reviced text is ${temp_text2}")
        }
        client.close()
        if (result_response.size > 0) {
            return Some(result_response)
        } else {
            return None
        }
    }

    5. Run and test
    Finally, test it against DeepSeek. That completes the whole case; it is simple and practical, and I hope it helps.
    I am participating in Case Co-creation round 4, application development with Huawei Developer Space + Cangjie/DeepSeek/MCP: cid:link_1
    References:
    https://gitcode.com/weixin_41024010/HUAWEIDEMO
    https://gitcode.com/Cangjie/Cangjie-Examples
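    The stream_chat function in the case above parses the response as Server-Sent Events: it splits on newlines, strips the "data: " prefix, stops at "[DONE]", and concatenates each choice's incremental delta content. That parsing step can be sketched in Python (field names follow the OpenAI-style response shape used in the Cangjie code; the sample stream below is fabricated for illustration):

```python
import json

def extract_deltas(sse_text: str) -> str:
    """Parse SSE chunks the way stream_chat does: strip the 'data: ' prefix,
    stop at '[DONE]', and collect the incremental delta content."""
    parts = []
    for line in sse_text.split("\n"):
        payload = line[len("data: "):] if line.startswith("data: ") else line
        if not payload:
            continue                      # skip blank keep-alive lines
        if payload == "[DONE]":
            break                         # end-of-stream sentinel
        obj = json.loads(payload)
        choices = obj.get("choices", [])
        if choices and choices[0].get("finish_reason") is None:
            parts.append(choices[0]["delta"]["content"])
    return "".join(parts)

# Fabricated two-chunk sample stream.
sample = ('data: {"choices":[{"finish_reason":null,"delta":{"content":"Hel"}}]}\n'
          'data: {"choices":[{"finish_reason":null,"delta":{"content":"lo"}}]}\n'
          'data: [DONE]\n')
print(extract_deltas(sample))  # -> Hello
```

    Note that, like the Cangjie version, this assumes each "data:" payload is a complete JSON object on its own line; a production parser would also buffer payloads split across network reads.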
  • [Discussion] After installing CodeArts IDE for Java, the first launch shows "Initialization failed, due to no login extension found in this IDE. The CodeArts IDE"
    The IDE cannot be opened and used normally.