-
[Module] Compiling hive-1.1.0 using Huawei's porting guide. [Steps & Symptom] The specific steps followed this guide: https://support.huaweicloud.com/prtg-cdh-kunpengbds/kunpenghivecdh5121_02_0009.html [Screenshots] [Logs] (optional; upload log content or attachments)
-
1. Symptom: Running msck repair table table_name in Hive fails with: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask (state=08S01,code=1).
2. Diagnosis: The HiveServer log /var/log/Bigdata/hive/hiveserver/hive.log shows that a directory name under the table path does not follow the partition naming format.
3. Solutions:
(1) Delete the offending files or directories.
(2) set hive.msck.path.validation=skip to skip the invalid directories.
(3) set hive.msck.path.validation=ignore to skip validation entirely (not recommended).
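The recommended workaround above can be sketched as a Beeline session (the table name table_name is a placeholder for your own table):

```sql
-- Per-session setting: skip directories whose names do not match the
-- partition format instead of failing the whole repair.
SET hive.msck.path.validation=skip;
MSCK REPAIR TABLE table_name;
```

Note that SET only affects the current session; deleting the malformed directories is the cleaner long-term fix.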
-
1. Symptom: Querying an ORC-backed Hive table fails with: Error: java.io.IOException: java.io.EOFException: Read past end of RLE integer from compressed stream Stream for column 2 kind LENGTH position: 6 length: 6 range: 0 offset: 16 limit: 16 range 0 = 0 to 6 uncompressed: 3 to 3.
2. Diagnosis:
(1) The HiveServer log /var/log/Bigdata/hive/hiveserver/hive.log shows the error.
(2) With the default hive.supports.orc.different.field.names=true, Hive matches ORC file data to the table by column count. When the table's column count matches the ORC file, data can be read even if the field names differ; but when the table has more columns than the ORC file, the resulting type mismatch raises this error.
3. Solutions:
(1) Recreate the table so its schema matches the ORC file.
(2) set hive.supports.orc.different.field.names=false to disable matching on differing field names; table columns that do not exist in the ORC file are then returned as NULL.
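Solution (2) above can be sketched as follows (hive.supports.orc.different.field.names is an MRS-specific parameter, and the table name is a placeholder):

```sql
-- Per-session setting: only map ORC columns whose names actually match
-- the table columns; unmatched table columns come back as NULL
-- instead of raising a read error.
SET hive.supports.orc.different.field.names=false;
SELECT * FROM orc_table LIMIT 10;
```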
-
1. Symptom: With MySQL as Hive's external metastore, creating a table with Chinese column names fails with: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Add request failed : INSERT INTO "COLUMNS_V2" ("CD_ID","COMMENT","COLUMN_NAME","TYPE_NAME","INTEGER_IDX") VALUES (?,?,?,?,?) ) (state=08S01,code=1).
2. Diagnosis: The character set of the columns_v2 table in MySQL is latin1, which cannot store Chinese characters.
3. Solution: Run the following in MySQL: alter table columns_v2 convert to character set utf8;
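A sketch of diagnosing and applying the fix in MySQL (the metastore database name hive is an assumption; substitute your own):

```sql
-- Inspect the current character set of the metastore table;
-- latin1 here confirms the diagnosis.
SHOW CREATE TABLE hive.COLUMNS_V2;

-- Convert both the table definition and the existing data to utf8
-- so Chinese column names can be stored.
ALTER TABLE hive.COLUMNS_V2 CONVERT TO CHARACTER SET utf8;
```

After the conversion, re-run the failing CREATE TABLE statement in Hive.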
-
MRS 3.0.2: a Hive SQL statement fails. Could an expert help identify where the problem is? Hive log:

WARN : Shutting down task : Stage-11:MAPRED
ERROR : Ended Job = job_1616739725962_0005 with exception 'org.apache.hadoop.yarn.exceptions.YarnRuntimeException(java.lang.InterruptedException: sleep interrupted)'
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.InterruptedException: sleep interrupted
    at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:350)
    at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:398)
    at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:904)
    at org.apache.hadoop.mapreduce.Job$6.run(Job.java:736)
    at org.apache.hadoop.mapreduce.Job$6.run(Job.java:733)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1737)
    at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:733)
    at org.apache.hadoop.mapred.JobClient$NetworkedJob.getTaskCompletionEvents(JobClient.java:355)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.computeReducerTimeStatsPerJob(HadoopJobExecHelper.java:634)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:592)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:650)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:78)
Caused by: java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:347)
    ... 16 more

YARN log excerpt:

2021-03-26 14:41:56,056 | ERROR | Thread-66 | Could not deallocate container for task attemptId attempt_1616739725962_0004_r_000000_0 | RMContainerAllocator.java:420
2021-03-26 14:41:56,056 | INFO | Thread-66 | Processing the event EventType: CONTAINER_DEALLOCATE | RMContainerAllocator.java:404
[the same ERROR/INFO pair repeats at 14:41:56,056 for task attempts attempt_1616739725962_0004_r_000001_0 through attempt_1616739725962_0004_r_000008_0]
2021-03-26 14:42:02,002 | INFO | AsyncDispatcher event handler | Num completed Tasks: 40 | JobImpl.java:2021
2021-03-26 14:42:02,002 | INFO | Socket Reader #4 for port 27102 | Socket Reader #4 for port 27102: readAndProcess from client 10.114.10.75:45628 threw exception [java.io.IOException: Connection reset by peer] | Server.java:1383
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:377)
    at org.apache.hadoop.ipc.Server.channelRead(Server.java:3486)
    at org.apache.hadoop.ipc.Server.access$2700(Server.java:140)
    at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:2173)
    at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:1376)
    at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1232)
    at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1203)
2021-03-26 14:42:02,002 | INFO | AsyncDispatcher event handler | Num completed Tasks: 41 | JobImpl.java:2021
2021-03-26 14:42:02,002 | INFO | RMCommunicator Allocator | Received completed container container_e06_1616739725962_0005_01_000014 | RMContainerAllocator.java:911
2021-03-26 14:42:02,002 | INFO | AsyncDispatcher event handler | Diagnostics report from attempt_1616739725962_0005_m_000053_0: [2021-03-26 14:42:02.336]Container killed by the ApplicationMaster.[2021-03-26 14:42:02.617]Container killed on request. Exit code is 143[2021-03-26 14:42:02.678]Container exited with a non-zero exit code 143. | TaskAttemptImpl.java:2604
2021-03-26 14:42:02,002 | INFO | RMCommunicator Allocator | After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:25 AssignedReds:0 CompletedMaps:77 CompletedReds:0 ContAlloc:69 ContRel:0 HostLocal:69 RackLocal:0 | RMContainerAllocator.java:1667
2021-03-26 14:42:02,002 | INFO | AsyncDispatcher event handler | Diagnostics report from attempt_1616739725962_0005_m_000003_0: [2021-03-26 14:42:01.253]Container killed by the ApplicationMaster.[2021-03-26 14:42:01.458]Container killed on request. Exit code is 143[2021-03-26 14:42:01.889]Container exited with a non-zero exit code 143. | TaskAttemptImpl.java:2604
2021-03-26 14:42:04,004 | INFO | RMCommunicator Allocator | Received completed container container_e06_1616739725962_0006_01_000015 | RMContainerAllocator.java:911
2021-03-26 14:42:04,004 | ERROR | RMCommunicator Allocator | Container complete event for unknown container container_e06_1616739725962_0006_01_000015 | RMContainerAllocator.java:919
[the same INFO/ERROR pair repeats at 14:42:04,004 for containers container_e06_1616739725962_0006_01_000018 and container_e06_1616739725962_0006_01_000020]
2021-03-26 14:42:04,004 | INFO | RMCommunicator Allocator | Got allocated containers 16 | RMContainerAllocator.java:1186
-
Our project team wants to ingest Hive data from an upstream data lake into the Hive instance on the manas platform. Which tool or platform should we use for this integration? Any guidance would be appreciated.
-
When setting up a Hive cluster, the Mapreduce component was not selected during template-based installation, so Mapreduce was never added. Solution: uninstall the cluster and reinstall.
1. On the data nodes, control nodes, standby management node, and active management node in turn, run: sh /opt/huawei/Bigdata/om-agent/nodeagent/setup/uninstall.sh
2. On the standby and then the active management node, run: sh /opt/huawei/Bigdata/om-server/om/inst/uninstall.sh
3. On the data nodes, control nodes, standby management node, and active management node in turn, unmount the data disks: /usr/local/diskmgt/script/uninstall.sh -u
4. In the configuration planning sheet, select the DBService and Mapreduce components (DBService requires a floating IP configured), then reinstall.
-
[Module] Hive ODBC sample integration. [Steps & Symptom] 1. We are currently stuck at connecting via ODBC on the platform; what could be causing this error? 2. Can Hive ODBC provide a Unicode interface, i.e. support Chinese? [Screenshots] [Logs] (optional; upload log content or attachments)
-
[Module] Hive permissions. [Steps & Symptom] Following the MRS Hive usage documentation, I created a new role, granted it administrator privileges, then logged in with beeline and ran set role admin, which fails. My steps:
1. Create the role.
2. Create the user.
3. Log in, obtain a Kerberos ticket, then connect with beeline and run set role admin.
I followed the documentation exactly, so why can I still not switch to the admin role? The documentation is as follows. There is also an article online addressing this fault, but its steps match mine and the problem persists: https://support.huaweicloud.com/trouble-mrs/mrs_03_0165.html
-
Writing to Hive: there are two ways to write to Hive. Create a Python file as follows, e.g. test_hive.py, then submit it as a task with spark-submit: spark-submit --master yarn --deploy-mode client --keytab ./user.keytab --principal developuser test_hive.py. After it completes, inspect the Hive table through beeline. Reading from Hive: against the Hive table created above, execute a query SQL and print the result.
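The Python file referenced above was not included in the post; a minimal sketch of what test_hive.py might contain is shown below, assuming a Hive-enabled Spark installation and write access to the target database (the table name default.demo_tb and its columns are illustrative placeholders):

```python
# test_hive.py -- minimal sketch, not the original file from the post.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("test_hive")
         .enableHiveSupport()   # required so Spark can reach the Hive metastore
         .getOrCreate())

# Way 1: write through the DataFrame API
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "name"])
df.write.mode("overwrite").saveAsTable("default.demo_tb")

# Way 2: write through Spark SQL
spark.sql("INSERT INTO default.demo_tb VALUES (3, 'c')")

# Read back the table created above and print the result
spark.sql("SELECT * FROM default.demo_tb").show()
```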
-
Live replay: https://bbs.huaweicloud.com/live/cloud_live/202012161900.html. For the lecture materials, see the attachment. Hands-on materials:
- Hive HCatalog interface sample: https://bbs.huaweicloud.com/forum/thread-90734-1-1.html
- Hive JDBC interface sample: https://bbs.huaweicloud.com/forum/thread-90735-1-1.html
- Hetu JDBC interface sample: https://bbs.huaweicloud.com/forum/thread-90737-1-1.html
-
[Module] After packaging and running on the server, the following error occurs. [Steps & Symptom] Could not open client transport with JDBC Uri: jdbc:hive2://xx:24002/hadoop.hadoop.com@HADOOP.COM;user.principal=ws_sjzy_rj;user.keytab=/sjzl/sparkTask/keytab/user.keytab; [Screenshots] [Logs] (optional; upload log content or attachments)
-
Today's share: single-node deployment of Hadoop and Hive, suitable for local debugging.
Installing Hive (2.3.3) on Linux in detail, with HiveSQL runs: https://bbs.huaweicloud.com/blogs/207331
Installing Hadoop (3.1.1) on Linux in detail, with a WordCount run: https://bbs.huaweicloud.com/blogs/207329
Posted by Lettle whale on 2020-11-13 09:15:59; last reply by Lettle whale on 2020-11-23 09:35:38.