-
Connecting to HBase from a local client fails with the error: NoNode for /hbase/hbaseid.
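In practice this error most often means the client's zookeeper.znode.parent does not match the parent znode the cluster actually registered under (the client looks for /hbase/hbaseid, but the cluster published its ID elsewhere). A minimal client-side hbase-site.xml sketch with placeholder hosts; the real values should be copied from the cluster's client configuration bundle:

```xml
<!-- Client-side hbase-site.xml; all values here are placeholders -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk-host1,zk-host2,zk-host3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <!-- Must match the parent znode the cluster registered under;
         "NoNode for /hbase/hbaseid" typically means this value is wrong. -->
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
</configuration>
```

A quick way to verify the correct parent is to list the root of the cluster's ZooKeeper (e.g. with zkCli) and look for the znode that contains a hbaseid child.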
-
Connecting to HetuEngine from Java in PasswordFabric mode reports the error below. The development environment works fine, but production fails, and we cannot tell which part of the configuration is wrong. MRS version: 3.2.1.
-
Question: when connecting to the Hive Metastore, must the client JARs match the server version exactly? Can the 3.1.0-h0.cbu.mrs.320.r48 client (the highest version obtainable from Maven) connect to a 3.1.0-h0.cbu.mrs.320.r77 HMS service?
Symptom: connecting to the 3.1.0-h0.cbu.mrs.320.r77 server-side HMS with a 3.1.0-h0.cbu.mrs.320.r48 HiveMetaStoreClient fails with an error. Attached: the server version, the client version and exception log, and additional server logs.
-
While HDFS DataNodes are restarted infrequently (rolling restarts), the HBase cluster's RegionServer WAL write path occasionally gets stuck with the following WAL sync timeout. How can this be resolved?

2024-08-26 15:35:13,294 ERROR [RS_CLOSE_REGION-regionserver/cqbs028:60020-1] executor.EventHandler: Caught throwable while processing event M_RS_CLOSE_REGION
java.lang.RuntimeException: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 300000 ms for txid=818811, WAL system stuck?
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:116)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 300000 ms for txid=818811, WAL system stuck?
    at org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:148)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:711)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:631)
    at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullAppendTransaction(WALUtil.java:158)
    at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:136)
    at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:101)
    at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionCloseMarker(HRegion.java:1145)
    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1684)
    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1501)
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)

While stopping a RegionServer, the WAL can also get stuck the same way, making the stop slow:

java.lang.RuntimeException: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 300000 ms for txid=818767, WAL system stuck?
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:116)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 300000 ms for txid=818767, WAL system stuck?
    at org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:148)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:711)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:631)
    at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullAppendTransaction(WALUtil.java:158)
    at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:136)
    at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:101)
    at org.apache.hadoop.hbase.regionserver.HRegion.writeRegionCloseMarker(HRegion.java:1145)
    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1684)
    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1501)
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
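The 300000 ms in the trace matches the default WAL sync timeout in Apache HBase 2.x (5 minutes). A hedged configuration sketch for lowering it so that a stuck sync fails faster and the WAL can roll onto healthy DataNodes; the property name is taken from Apache HBase and should be verified against the exact MRS HBase build, and the value is only illustrative:

```xml
<!-- hbase-site.xml on the RegionServers. Assumed Apache HBase 2.x property;
     verify it exists in your MRS HBase version before applying. -->
<property>
  <name>hbase.regionserver.wal.sync.timeout</name>
  <!-- Default is 300000 ms (5 min), matching the trace. A lower value lets a
       stuck sync fail sooner so the WAL can roll, at the cost of earlier
       aborts when HDFS is genuinely slow. -->
  <value>60000</value>
</property>
```

Whether lowering the timeout is appropriate depends on the root cause; it treats the symptom (a sync pinned to a restarting DataNode) rather than the restart behavior itself.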
-
MRS (whitelist-enabled) 3.3.0-LTS: the master nodes' data disks are 600 GB, but disk utilization is only about 1%. Is there a way to reduce the master nodes' data disk size?
-
The submit command is as above, and the error is as follows. This is a standalone MRS environment; is some dependency still missing?
-
I want to fetch HBase JMX metrics in JSON format from MRS via Java code, calling the https://xxx/20026/HBase/HMaster/126/jmx endpoint. The request is redirected to a login page and then through several further redirects, and the cookie obtained that way still cannot access the endpoint.
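The redirect loop usually comes from the Manager's CAS-style web login. By default, Java's HttpClient neither follows redirects nor stores cookies, so the session established during the login hops is lost, which is why a manually copied cookie does not work. A minimal sketch of a client setup that keeps session cookies across redirects; the URL in the comment is the one from the question, and the actual MRS login flow (CAS ticket POST, possibly SPNEGO/Kerberos) is not implemented here and would still need handling:

```java
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.http.HttpClient;
import java.time.Duration;

public class JmxClientSketch {

    // Build an HTTP client that follows redirects and retains cookies
    // (e.g. JSESSIONID) across the login hops; the default HttpClient
    // does neither.
    static HttpClient buildClient() {
        CookieManager cookies = new CookieManager();
        cookies.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
        return HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .connectTimeout(Duration.ofSeconds(10))
                .cookieHandler(cookies)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = buildClient();
        System.out.println(client.followRedirects()); // prints NORMAL
        // With login handled first, a real request would look like:
        // HttpRequest req = HttpRequest.newBuilder(
        //         URI.create("https://xxx/20026/HBase/HMaster/126/jmx")).GET().build();
        // String json = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

Note that Redirect.NORMAL deliberately refuses to follow redirects from HTTPS to plain HTTP, which is usually what you want against a Manager endpoint.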
-
The stream table configuration is as follows, using the hash structure. The Flink SQL statement does a simple aggregation of time and count. Given the result shown, the question is: how should this be configured so that begin_time is used as the hash key and sum as the value?
-
Where can I get the redis-examples project, i.e. the Redis sample code for Huawei FusionInsight MRS secondary development?
-
The MRS cluster is in security mode. We cannot connect to the Kafka cluster even after disabling Ranger authorization; the connectivity test reports an unknown error, yet Kafka works fine from within the cluster client.
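On a security-mode cluster, Ranger only controls authorization; Kerberos authentication is still required, so disabling Ranger does not remove the need for SASL settings on the client. A minimal client configuration sketch using standard Apache Kafka properties; the port, principal, and keytab path are assumptions to replace with your cluster's actual values, and the JVM additionally needs the cluster's krb5.conf (e.g. -Djava.security.krb5.conf=/path/to/krb5.conf):

```properties
# client.properties -- standard Apache Kafka SASL/Kerberos client settings
# 21007 is commonly the FusionInsight/MRS SASL_PLAINTEXT port; verify for your cluster.
bootstrap.servers=broker1:21007,broker2:21007
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# Inline JAAS login (alternatively pass -Djava.security.auth.login.config=jaas.conf);
# principal and keytab below are placeholders.
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true \
  keyTab="/opt/client/user.keytab" \
  principal="devuser@HADOOP.COM";
```

If the in-cluster client works, comparing its consumer.properties/producer.properties and JAAS settings against the failing external client is usually the fastest way to spot the missing piece.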
-
Problem description: I want to collect each component's log files through the Manager REST API. From the Manager REST API reference, the relevant interface is documented as shown. Based on that, I constructed the following request:

curl -k -i --basic -u admin:******* -c /tmp/jsessionid.txt -X POST -HContent-type:application/json -d '{"startTime":"2024/06/06 14:00:56","endTime":"2024/06/06 14:10:56","hosts":"10.22.82.53","sources":[{"clusterId":1,"clusterName":"MRSCluster","services":"HDFS","sourceType":"CLUSTER"}]}' 'https://******:28443/web/api/v2/log/gather'

The error message is as follows. From the error, I suspect some parameter is wrong, but my parameters strictly follow the interface documentation and I cannot find the problem. I also tried the following, and both attempts fail with the same error:
1. Removing the services parameter from the request body entirely.
2. Replacing it with the service values returned by the /api/v2/log/gather/support_services interface.

Attachment: the response of /api/v2/log/gather/support_services:

[{"sourceType":"CLUSTER","clusterId":1,"clusterName":"MRSCluster","services":[{"name":"DBService","componentName":"DBService","displayName":"DBService","modules":[{"name":"DBService","path":"/var/log/Bigdata/dbservice"},{"name":"GaussDB","path":"/var/log/Bigdata/dbservice/DB"},{"name":"HA","path":"/var/log/Bigdata/dbservice/ha"},{"name":"DBServiceAudit","path":"/var/log/Bigdata/audit/dbservice"}]},{"name":"Flink","componentName":"Flink","displayName":"Flink","modules":[{"name":"FlinkResource","path":"/var/log/Bigdata/flink/flinkResource"},{"name":"FlinkServer","path":"/var/log/Bigdata/flink/flinkserver"}]},{"name":"HDFS","componentName":"HDFS","displayName":"HDFS","modules":[{"name":"JournalNode","path":"/var/log/Bigdata/hdfs/jn"},{"name":"DataNode","path":"/var/log/Bigdata/hdfs/dn"},{"name":"NameNode","path":"/var/log/Bigdata/hdfs/nn"},{"name":"Zkfc","path":"/var/log/Bigdata/hdfs/zkfc"},{"name":"Router","path":"/var/log/Bigdata/hdfs/router"},{"name":"JNAudit","path":"/var/log/Bigdata/audit/hdfs/jn"},{"name":"DNAudit","path":"/var/log/Bigdata/audit/hdfs/dn"},{"name":"NNAudit","path":"/var/log/Bigdata/audit/hdfs/nn"},{"name":"ZkfcAudit","path":"/var/log/Bigdata/audit/hdfs/zkfc"},{"name":"RouterAudit","path":"/var/log/Bigdata/audit/hdfs/router"},{"name":"HttpFS","path":"/var/log/Bigdata/hdfs/httpfs"}]},{"name":"Hive","componentName":"Hive","displayName":"Hive","modules":[{"name":"HiveServer","path":"/var/log/Bigdata/hive/hiveserver"},{"name":"HiveServerAudit","path":"/var/log/Bigdata/audit/hive/hiveserver"},{"name":"MetaStore","path":"/var/log/Bigdata/hive/metastore"},{"name":"MetaStoreAudit","path":"/var/log/Bigdata/audit/hive/metastore"},{"name":"WebHCat","path":"/var/log/Bigdata/hive/webhcat"},{"name":"WebHCatAudit","path":"/var/log/Bigdata/audit/hive/webhcat"}]},{"name":"Kafka","componentName":"Kafka","displayName":"Kafka","modules":[{"name":"Broker","path":"/var/log/Bigdata/kafka/broker"},{"name":"KafkaUI","path":"/var/log/Bigdata/kafka/ui"},{"name":"MirrorMaker","path":"/var/log/Bigdata/kafka/mirrormaker"}]},{"name":"KrbServer","componentName":"KrbServer","displayName":"KrbServer","modules":[{"name":"KrbServer","path":"/var/log/Bigdata/kerberos"}]},{"name":"LdapClient","componentName":"LdapClient","displayName":"LdapClient","modules":[{"name":"LdapClient","path":"/var/log/Bigdata/ldapclient"}]},{"name":"LdapServer","componentName":"LdapServer","displayName":"LdapServer","modules":[{"name":"LdapServer","path":"/var/log/Bigdata/ldapserver"}]},{"name":"Mapreduce","componentName":"Mapreduce","displayName":"Mapreduce","modules":[{"name":"JobHistoryServer","path":"/var/log/Bigdata/mapreduce/jobhistory"},{"name":"JHSAudit","path":"/var/log/Bigdata/audit/mapreduce/jobhistory"}]},{"name":"Ranger","componentName":"Ranger","displayName":"Ranger","modules":[{"name":"RangerAdmin","path":"/var/log/Bigdata/ranger/rangeradmin"},{"name":"UserSync","path":"/var/log/Bigdata/ranger/usersync/"},{"name":"TagSync","path":"/var/log/Bigdata/ranger/tagsync/"},{"name":"PolicySync","path":"/var/log/Bigdata/ranger/policysync/"},{"name":"RangerKMS","path":"/var/log/Bigdata/ranger/rangerkms/"}]},{"name":"Spark","componentName":"Spark","displayName":"Spark","modules":[{"name":"JobHistory","path":"/var/log/Bigdata/spark/JobHistory"},{"name":"JobHistoryAudit","path":"/var/log/Bigdata/audit/spark/jobhistory"},{"name":"JDBCServer","path":"/var/log/Bigdata/spark/JDBCServer"},{"name":"JDBCServerAudit","path":"/var/log/Bigdata/audit/spark/jdbcserver"},{"name":"IndexServer","path":"/var/log/Bigdata/spark/IndexServer"},{"name":"IndexServerAudit","path":"/var/log/Bigdata/audit/spark/indexserver"},{"name":"SparkResource","path":"/var/log/Bigdata/spark/SparkResource"}]},{"name":"Yarn","componentName":"Yarn","displayName":"Yarn","modules":[{"name":"ResourceManager","path":"/var/log/Bigdata/yarn/rm"},{"name":"NodeManager","path":"/var/log/Bigdata/yarn/nm"},{"name":"TimelineServer","path":"/var/log/Bigdata/yarn/tls"},{"name":"RMAudit","path":"/var/log/Bigdata/audit/yarn/rm"},{"name":"NMAudit","path":"/var/log/Bigdata/audit/yarn/nm"}]},{"name":"ZooKeeper","componentName":"ZooKeeper","displayName":"ZooKeeper","modules":[{"name":"ZooKeeper","path":"/var/log/Bigdata/zookeeper/quorumpeer"},{"name":"ZKAudit","path":"/var/log/Bigdata/audit/zookeeper/quorumpeer"}]},{"name":"meta","componentName":"meta","displayName":"meta","modules":[{"name":"meta","path":"/var/log/Bigdata/meta"}]}]},{"sourceType":"OMS","clusterId":-1,"clusterName":"Manager","services":[{"name":"Manager","componentName":"Manager","displayName":"Manager","modules":[{"name":"Controller","path":"/var/log/Bigdata/controller"},{"name":"NodeAgent","path":"/var/log/Bigdata/nodeagent"},{"name":"NodeMetricAgent","path":"/var/log/Bigdata/metric_agent"},{"name":"Tomcat","path":"/var/log/Bigdata/tomcat"},{"name":"Httpd","path":"/var/log/Bigdata/httpd"},{"name":"OmsKerberos","path":"/var/log/Bigdata/okerberos"},{"name":"OmsLdapServer","path":"/var/log/Bigdata/oldapserver"},{"name":"OmmServer","path":"/var/log/Bigdata/omm/oms"},{"name":"OmmAgent","path":"/var/log/Bigdata/omm/oma"},{"name":"OmmCore","path":"/var/log/Bigdata/omm/core"},{"name":"Patch","path":"/var/log/Bigdata/patch"},{"name":"Upgrade","path":"/var/log/Bigdata/upgrade"},{"name":"OS","path":"/var/log"},{"name":"OS Statistics","path":"/var/log/osinfo/statistics"},{"name":"OS Performance","path":"/var/log/osperf"},{"name":"Disaster","path":"/var/log/Bigdata/disaster"},{"name":"executor","path":"/var/log/Bigdata/executor"}]}]}]
-
The lab "MRS basic components: HDFS and MapReduce development and application" is listed as 120 minutes long, but when I enter I only have 51 minutes, and starting the resources takes 30 minutes, so there is simply not enough time to finish.
-
Is ordering based on the time a command is issued, or on the time of entering the queue? According to our tests, it follows the order in which commands are issued. Is this the intended design? Shouldn't berth entry be ordered by arrival time instead?
-
Environment: FusionInsight HD 6513.
Background:
1. The existing DataNode machines are mostly ARM, with relatively high specs and newer hardware.
2. A batch of lower-performance, lower-spec x86 hosts now needs to be added to the cluster.
Plan: enable the HDFS NodeLabel feature, apply labels to HDFS directories, and assign the newly added hosts to the labeled directories, so as to avoid load imbalance and similar issues from the heterogeneous hardware.
Questions:
1. Please confirm whether this plan is feasible, and whether there is a better approach.
2. If it is feasible, what should we watch out for? Any pitfall cases (the more detailed, the better) would be appreciated. Any help from the community is much appreciated!
-
1. Created an index with required routing:

curl -XPUT --tlsv1.2 --negotiate -k -u : 'http://ip:24100/indexname?pretty' -H 'Content-Type: application/json' -d '{"settings" : {"number_of_shards" : 2,"number_of_replicas" : 1,"routing_partition_size": 1}, "mappings": {"_routing": { "required": true },"_source": {"enabled": true},"properties": {"name": {"type": "text"},"age": {"type": "integer"}}}}'

2. Prepared the JSON data and completed the bulk request:

{"index":{"_id":"1001", "routing" : "1001"}}
{"name":"zhangsan","age":20}
{"index":{"_id":"1002", "routing" : "1002"}}
{"name":"lisi","age":30}

curl -XPOST -H 'Content-Type: application/json' 'http://ip:24100/indexname/_bulk?pretty' --data-binary @/json.js

3. Query test:

curl -XGET --tlsv1.2 --negotiate -k -u : 'http://ip:24100/indexname/_doc/1002?routing=A&pretty=true'

Question: what does A mean in routing=A? With any other value, the document cannot be found. Or is there a problem with the JSON data definition?