• [Cloud Lab] MRS Basic Components: HDFS and MapReduce Development and Application
    This lab is listed as 120 minutes long, but when I entered it only 51 minutes remained, and starting the resources alone takes 30 minutes, so there is simply not enough time to finish.
  • [Regional Preliminary Question] When ships are queued for a berth (arriving at different times, but all waiting), which one enters the berth first?
    Is it the order in which the commands were issued, or the order of joining the queue? Our testing shows it is the order in which the commands were issued. Is this the intended behavior? Shouldn't ships enter the berth in order of arrival time?
  • Enabling HDFS NodeLabel: what are the pitfalls, and which impact areas need special attention?
    Environment: FusionInsight HD 6513.
    Background:
    1. The existing DataNode machines are mostly ARM, relatively new and with high specs.
    2. A batch of lower-performance, lower-spec x86 hosts needs to be added to the cluster.
    Plan: enable the HDFS NodeLabel feature, tag HDFS directories with labels, and assign the newly added hosts to the directories carrying the matching label, so as to avoid the load imbalance that heterogeneous hardware could otherwise cause.
    Questions:
    1. Please confirm whether this plan is feasible, and whether there is a better approach.
    2. If it is feasible, what should we watch out for? Are there any known pitfall cases (the more detailed the better)?
    Any help from the community would be much appreciated!
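    For reference, a sketch of how directory label expressions are typically set once NodeLabel is enabled, based on the MRS/FusionInsight documentation; the label name and path below are made-up examples, DataNode-to-label assignment is done separately in Manager, and the exact syntax should be verified against the HD 6513 docs:
      # Tag a directory so its blocks land only on DataNodes carrying label X86 (example values)
      hdfs nodelabel -setLabelExpression -expression 'X86[fallback=NONE]' -path /user/x86_data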
  • [Help Wanted] Question about routing when bulk-loading JSON data into Elasticsearch
    1. Created an index that requires routing:
    curl -XPUT --tlsv1.2 --negotiate -k -u : 'http://ip:24100/indexname?pretty' -H 'Content-Type: application/json' -d '{"settings":{"number_of_shards":2,"number_of_replicas":1,"routing_partition_size":1},"mappings":{"_routing":{"required":true},"_source":{"enabled":true},"properties":{"name":{"type":"text"},"age":{"type":"integer"}}}}'
    2. Prepared the JSON data and ran the bulk load:
    {"index":{"_id":"1001","routing":"1001"}}
    {"name":"zhangsan","age":20}
    {"index":{"_id":"1002","routing":"1002"}}
    {"name":"lisi","age":"30"}
    curl -XPOST -H 'Content-Type: application/json' 'http://ip:24100/indexname/_bulk?pretty' --data-binary @/json.js
    3. Query test:
    curl -XGET --tlsv1.2 --negotiate -k -u : 'http://ip:24100/indexname/_doc/1002?routing=A&pretty=true'
    Question: what does routing=A mean here? With any other value the document can no longer be found. Or is there a problem with how the JSON data is defined?
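    For context, under standard Elasticsearch routing semantics the routing value supplied at query time should be the same one used when the document was indexed ("1002" above); it is hashed to choose the shard, so an arbitrary value such as A only finds the document when it happens to hash to the same shard (and routing_partition_size: 1 is the default, one shard per routing value). Under that assumption, the lookup would be:
      curl -XGET --tlsv1.2 --negotiate -k -u : 'http://ip:24100/indexname/_doc/1002?routing=1002&pretty=true'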
  • [Help Wanted] How can the permissions held by a role on the FusionInsight HD management platform be queried from the backend/an API, down to the granularity of tables/views/topics/directories?
    Requirement: query, in batch and from the backend, which services and permissions each role on the FusionInsight HD Manager has been granted. For example, for role A: which services and permissions it holds; for the Hive component, which database, table, and view permissions; for the HDFS component, which directories it can read and write; and so on.
  • [Help Wanted] MRS job submitted to YARN fails with "Could not load service provider for table factories"
    Submitting a job to YARN on MRS fails with: Could not load service provider for table factories
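    This error is typically thrown by Flink's table-factory SPI discovery when the service registration files are missing from the classpath, e.g. a fat jar built without merging META-INF/services, or missing flink-table/connector jars. A quick way to check the job jar (the jar name is a placeholder):
      # Flink 1.10 and earlier look up this service file:
      unzip -p myjob.jar META-INF/services/org.apache.flink.table.factories.TableFactory
      # Flink 1.11+ connectors/formats register here instead:
      unzip -p myjob.jar META-INF/services/org.apache.flink.table.factories.Factory
    If these come back empty and the build uses maven-shade-plugin, adding its ServicesResourceTransformer so the SPI files are merged is the usual fix.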
  • [Help Wanted] How to connect MRS FlinkSQL to OceanBase
    Can flink-sql-connector-oceanbase-cdc be used for the connection? Does MRS Flink support this, and if it does, where should I put the jar for it to take effect?
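    A sketch of the usual way third-party connector jars are made visible to a Flink client, assuming an MRS client installed at /opt/client (the path and jar version are placeholders, and whether this particular CDC connector works must be verified against your MRS Flink version):
      # Copy the connector into the Flink client's lib directory
      cp flink-sql-connector-oceanbase-cdc-*.jar /opt/client/Flink/flink/lib/
      # Re-source the client environment so the next SQL client session picks the jar up
      source /opt/client/bigdata_env
    For jobs developed on the FlinkServer web UI, the jar would presumably need to be uploaded as a dependency through Manager instead of copied to the client.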
  • [Help Wanted] FusionInsight_HD_8.2.0.1: select 'hello' in the Flink SQL client fails with KeeperErrorCode = ConnectionLoss for /flink_base/flink
    A select in the Flink SQL client still fails; could someone point out where the problem is? Thanks. The error is:
    org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException$SessionClosedRequireAuthException: KeeperErrorCode = Session closed because client failed to authenticate for /flink_base/flink
    or
    org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /flink_base/flink
    ZooKeeper is running (192.168.0.82:24002), and the ACLs in ZooKeeper have been set, but setting the quota fails:
    [zk: 192.168.0.82:24002(CONNECTED) 2] create /flink_base/flink_base
    Created /flink_base/flink_base
    [zk: 192.168.0.82:24002(CONNECTED) 3] ls /flink_base/
    Path must not end with / character
    [zk: 192.168.0.82:24002(CONNECTED) 4] ls /flink_base
    [flink, flink_base]
    [zk: 192.168.0.82:24002(CONNECTED) 5] setquota -n 1000000 /flink_base/flink
    Insufficient permission : /flink_base/flink
    [zk: 192.168.0.82:24002(CONNECTED) 6] getAcl /flink_base/flink
    'world,'anyone : cdrwa
    [zk: 192.168.0.82:24002(CONNECTED) 7] setAcl /flink_base/flink world:anyone:rwcda
    [zk: 192.168.0.82:24002(CONNECTED) 8] setquota -n 1000000 /flink_base/flink
    Insufficient permission : /flink_base/flink
    [zk: 192.168.0.82:24002(CONNECTED) 9] getAcl /flink_base/
    Path must not end with / character
    [zk: 192.168.0.82:24002(CONNECTED) 10] getAcl /flink_base
    'world,'anyone : cdrwa
    [zk: 192.168.0.82:24002(CONNECTED) 11] getAcl /flink_base/flink
    'world,'anyone : cdrwa
    [zk: 192.168.0.82:24002(CONNECTED) 12] ls /zookeeper/quota
    [beeline, elasticsearch, flink_base, graphbase, hadoop, hadoop-adapter-data, hadoop-flag, hadoop-ha, hbase, hdfs-acl-log, hive, hiveserver2, kafka, loader, mr-ha, rmstore, sparkthriftserver, sparkthriftserver2x, sparkthriftserver2x_sparkInternal_HAMode, yarn-leader-election]
    [zk: 192.168.0.82:24002(CONNECTED) 13] ls /zookeeper/quota/flink_base
    [zookeeper_limits, zookeeper_stats]
    [zk: 192.168.0.82:24002(CONNECTED) 5] setquota -n 1000000 /flink_base/flink
    Insufficient permission : /flink_base/flink
    The log from tail -f /home/dmp/app/ficlient/Flink/flink/log/flink-root-sql-client-192-168-0-85.log is as follows. The full configuration in flink-conf.yaml:
    akka.ask.timeout: 120 s
    akka.client-socket-worker-pool.pool-size-factor: 1.0
    akka.client-socket-worker-pool.pool-size-max: 2
    akka.client-socket-worker-pool.pool-size-min: 1
    akka.framesize: 10485760b
    akka.log.lifecycle.events: false
    akka.lookup.timeout: 30 s
    akka.server-socket-worker-pool.pool-size-factor: 1.0
    akka.server-socket-worker-pool.pool-size-max: 2
    akka.server-socket-worker-pool.pool-size-min: 1
    akka.ssl.enabled: true
    akka.startup-timeout: 10 s
    akka.tcp.timeout: 60 s
    akka.throughput: 15
    blob.fetch.backlog: 1000
    blob.fetch.num-concurrent: 50
    blob.fetch.retries: 50
    blob.server.port: 32456-32520
    blob.service.ssl.enabled: true
    classloader.check-leaked-classloader: false
    classloader.resolve-order: child-first
    client.rpc.port: 32651-32720
    client.timeout: 120 s
    compiler.delimited-informat.max-line-samples: 10
    compiler.delimited-informat.max-sample-len: 2097152
    compiler.delimited-informat.min-line-samples: 2
    env.hadoop.conf.dir: /home/dmp/app/ficlient/Flink/flink/conf
    env.java.opts.client: -Djava.io.tmpdir=/home/dmp/app/ficlient/Flink/tmp
    env.java.opts.jobmanager: -Djava.security.krb5.conf=/opt/huawei/Bigdata/common/runtime/krb5.conf -Djava.io.tmpdir=${PWD}/tmp -Des.security.indication=true
    env.java.opts.taskmanager: -Djava.security.krb5.conf=/opt/huawei/Bigdata/common/runtime/krb5.conf -Djava.io.tmpdir=${PWD}/tmp -Des.security.indication=true
    env.java.opts: -Xloggc:<LOG_DIR>/gc.log -XX:+PrintGCDetails -XX:-OmitStackTraceInFastThrow -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=20 -XX:GCLogFileSize=20M -Djdk.tls.ephemeralDHKeySize=3072 -Djava.library.path=${HADOOP_COMMON_HOME}/lib/native -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv6Addresses=false -Dbeetle.application.home.path=/opt/huawei/Bigdata/common/runtime/security/config -Dwcc.configuration.path=/opt/huawei/Bigdata/common/runtime/security/config -Dscc.configuration.path=/opt/huawei/Bigdata/common/runtime/securityforscc/config -Dscc.bigdata.common=/opt/huawei/Bigdata/common/runtime
    env.yarn.conf.dir: /home/dmp/app/ficlient/Flink/flink/conf
    flink.security.enable: true
    flinkserver.alarm.cert.skip: true
    flinkserver.host.ip:
    fs.output.always-create-directory: false
    fs.overwrite-files: false
    heartbeat.interval: 10000
    heartbeat.timeout: 120000
    high-availability.job.delay: 10 s
    high-availability.storageDir: hdfs://hacluster/flink/recovery
    high-availability.zookeeper.client.acl: creator
    high-availability.zookeeper.client.connection-timeout: 90000
    high-availability.zookeeper.client.max-retry-attempts: 5
    high-availability.zookeeper.client.retry-wait: 5000
    high-availability.zookeeper.client.session-timeout: 90000
    high-availability.zookeeper.client.tolerate-suspended-connections: true
    high-availability.zookeeper.path.root: /flink
    high-availability.zookeeper.path.under.quota: /flink_base
    high-availability.zookeeper.quorum: 192.168.0.82:24002,192.168.0.81:24002,192.168.0.80:24002
    high-availability.zookeeper.quota.enabled: true
    high-availability: zookeeper
    job.alarm.enable: true
    jobmanager.heap.size: 1024mb
    jobmanager.web.403-redirect-url: https://192.168.0.82:28443/web/pages/error/403.html
    jobmanager.web.404-redirect-url: https://192.168.0.82:28443/web/pages/error/404.html
    jobmanager.web.415-redirect-url: https://192.168.0.82:28443/web/pages/error/415.html
    jobmanager.web.500-redirect-url: https://192.168.0.82:28443/web/pages/error/500.html
    jobmanager.web.access-control-allow-origin: *
    jobmanager.web.accesslog.enable: true
    jobmanager.web.allow-access-address: *
    jobmanager.web.backpressure.cleanup-interval: 600000
    jobmanager.web.backpressure.delay-between-samples: 50
    jobmanager.web.backpressure.num-samples: 100
    jobmanager.web.backpressure.refresh-interval: 60000
    jobmanager.web.cache-directive: no-store
    jobmanager.web.checkpoints.disable: false
    jobmanager.web.checkpoints.history: 10
    jobmanager.web.expires-time: 0
    jobmanager.web.history: 5
    jobmanager.web.logout-timer: 600000
    jobmanager.web.pragma-value: no-cache
    jobmanager.web.refresh-interval: 3000
    jobmanager.web.ssl.enabled: false
    jobmanager.web.x-frame-options: DENY
    library-cache-manager.cleanup.interval: 3600
    metrics.internal.query-service.port: 28844-28943
    metrics.reporter.alarm.factory.class: com.huawei.mrs.flink.alarm.FlinkAlarmReporterFactory
    metrics.reporter.alarm.interval: 30 s
    metrics.reporter.alarm.job.alarm.checkpoint.consecutive.failures.num: 5
    metrics.reporter.alarm.job.alarm.failure.restart.rate: 80
    metrics.reporter.alarm.job.alarm.task.backpressure.duration: 180 s
    metrics.reporter: alarm
    nettyconnector.message.delimiter: $_
    nettyconnector.registerserver.topic.storage: /flink/nettyconnector
    nettyconnector.sinkserver.port.range: 28444-28843
    nettyconnector.ssl.enabled: false
    parallelism.default: 1
    query.client.network-threads: 0
    query.proxy.network-threads: 0
    query.proxy.ports: 32541-32560
    query.proxy.query-threads: 0
    query.server.network-threads: 0
    query.server.ports: 32521-32540
    query.server.query-threads: 0
    resourcemanager.taskmanager-timeout: 300000
    rest.await-leader-timeout: 30000
    rest.bind-port: 32261-32325
    rest.client.max-content-length: 104857600
    rest.connection-timeout: 15000
    rest.idleness-timeout: 300000
    rest.retry.delay: 3000
    rest.retry.max-attempts: 20
    rest.server.max-content-length: 104857600
    rest.server.numThreads: 4
    restart-strategy.failure-rate.delay: 10 s
    restart-strategy.failure-rate.failure-rate-interval: 60 s
    restart-strategy.failure-rate.max-failures-per-interval: 1
    restart-strategy.fixed-delay.attempts: 3
    restart-strategy.fixed-delay.delay: 10 s
    restart-strategy: none
    security.cookie: 9477298cd52a3e409ed0bc570bdc795179fcc7c301a1225e22f47fe0a3db47c2
    security.enable: true
    security.kerberos.login.contexts: Client,KafkaClient
    security.kerberos.login.keytab:
    security.kerberos.login.principal:
    security.kerberos.login.use-ticket-cache: true
    security.networkwide.listen.restrict: true
    security.ssl.algorithms: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    security.ssl.enabled: false
    security.ssl.encrypt.enabled: false
    security.ssl.key-password: Bapuser@9000
    security.ssl.keystore-password: Bapuser@9000
    security.ssl.keystore: ssl/flink.keystore
    security.ssl.protocol: TLSv1.2
    security.ssl.rest.enabled: false
    security.ssl.truststore-password: Bapuser@9000
    security.ssl.truststore: ssl/flink.truststore
    security.ssl.verify-hostname: false
    slot.idle.timeout: 50000
    slot.request.timeout: 300000
    state.backend.fs.checkpointdir: hdfs://hacluster/flink/checkpoints
    state.backend.fs.memory-threshold: 20kb
    state.backend.incremental: true
    state.backend: rocksdb
    state.savepoints.dir: hdfs://hacluster/flink/savepoint
    task.cancellation.interval: 30000
    task.cancellation.timeout: 180000
    taskmanager.data.port: 32391-32455
    taskmanager.data.ssl.enabled: false
    taskmanager.debug.memory.logIntervalMs: 0
    taskmanager.debug.memory.startLogThread: false
    taskmanager.heap.size: 1024mb
    taskmanager.initial-registration-pause: 500 ms
    taskmanager.max-registration-pause: 30 s
    taskmanager.maxRegistrationDuration: 5 min
    taskmanager.memory.fraction: 0.7
    taskmanager.memory.off-heap: false
    taskmanager.memory.preallocate: false
    taskmanager.memory.segment-size: 32768
    taskmanager.network.detailed-metrics: false
    taskmanager.network.memory.buffers-per-channel: 2
    taskmanager.network.memory.floating-buffers-per-gate: 8
    taskmanager.network.memory.fraction: 0.1
    taskmanager.network.memory.max: 1gb
    taskmanager.network.memory.min: 64mb
    taskmanager.network.netty.client.connectTimeoutSec: 300
    taskmanager.network.netty.client.numThreads: -1
    taskmanager.network.netty.num-arenas: -1
    taskmanager.network.netty.sendReceiveBufferSize: 4096
    taskmanager.network.netty.server.backlog: 0
    taskmanager.network.netty.server.numThreads: -1
    taskmanager.network.netty.transport: nio
    taskmanager.network.numberOfBuffers: 2048
    taskmanager.network.request-backoff.initial: 100
    taskmanager.network.request-backoff.max: 10000
    taskmanager.numberOfTaskSlots: 1
    taskmanager.refused-registration-pause: 10 s
    taskmanager.registration.timeout: 5 min
    taskmanager.rpc.port: 32326-32390
    taskmanager.runtime.hashjoin-bloom-filters: false
    taskmanager.runtime.max-fan: 128
    taskmanager.runtime.sort-spilling-threshold: 0.8
    use.path.filesystem: true
    use.smarterleaderlatch: true
    web.submit.enable: false
    web.timeout: 10000
    yarn.application-attempt-failures-validity-interval: 600000
    yarn.application-attempts: 5
    yarn.application-master.port: 32586-32650
    yarn.heap-cutoff-min: 384
    yarn.heap-cutoff-ratio: 0.25
    yarn.heartbeat-delay: 5
    yarn.heartbeat.container-request-interval: 500
    yarn.maximum-failed-containers: 5
    yarn.per-job-cluster.include-user-jar: ORDER
    zk.ssl.enabled: false
    zookeeper.clientPort.quorum: 192.168.0.82:24002,192.168.0.81:24002,192.168.0.80:24002
    zookeeper.root.acl: OPEN
    zookeeper.sasl.disable: false
    zookeeper.sasl.login-context-name: Client
    zookeeper.sasl.service-name: zookeeper
    zookeeper.secureClientPort.quorum: 192.168.0.82:24002,192.168.0.81:24002,192.168.0.80:24002
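    Worth noting: with security.kerberos.login.keytab and security.kerberos.login.principal left empty, use-ticket-cache: true, and high-availability.zookeeper.client.acl: creator, the SessionClosedRequireAuthException suggests the SQL client has no valid Kerberos ticket when it talks to ZooKeeper. A first thing to try, assuming the standard FusionInsight client layout under /home/dmp/app/ficlient (the user name is a placeholder):
      # Load the client environment and obtain a ticket before starting the SQL client
      source /home/dmp/app/ficlient/bigdata_env
      kinit <your_cluster_user>
      klist
    The setquota failure may be the same symptom: FusionInsight's quota subtree is ACL-protected, so an unauthenticated zkCli session would be refused regardless of the world:anyone ACL on the node itself.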
  • [Tutorial] Roundup of MRS technical support materials
    MapReduce Service (MRS):
    - MRS product documentation: https://support.huaweicloud.com/mrs/index.html
    - MRS video introduction: https://support.huaweicloud.com/mrs_video/index.html
    - MRS hands-on labs: https://lab.huaweicloud.com/experiment-list?ticket=ST-82337421-kKgD75jMOxdSAkr1TN3LuLVT-sso
    - MRS getting-started practices: https://support.huaweicloud.com/qs-mrs/mrs_09_0027.html
    - Developer forum: https://bbs.huaweicloud.com/forum/forum-612-1.html
    - Huawei Cloud big data HCIA/HCIP certification, free video courses: https://edu.huaweicloud.com/training
    - Huawei Cloud big data developer certification HCCDP, free video courses: https://edu.huaweicloud.com/certificationindex
  • [Ecosystem Integration] Cannot connect to the MRS hive-metastore with open-source Hive
    Symptom: using open-source Hive dependencies (hive-exec, hive-metastore, etc.), we cannot connect to the MRS Hive MetaStore.
    Question 1: does the MRS Hive MetaStore support external access?
    Question 2: if it does, what is required? (Must the dependencies exactly match MRS Hive, or are there other caveats?)
    Related: I found a similar question at https://bbs.huaweicloud.com/forum/thread-99927-1-1.html; is it the same class of problem?
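    For reference, a minimal sketch of the client-side settings an external Hive client usually needs to reach a Kerberized metastore; the host, port, and principal are placeholders to be taken from the cluster's own hive-site.xml, and this does not by itself answer whether MRS permits external access:
      # Assumed entries in the client's hive-site.xml (shown as key=value for brevity)
      hive.metastore.uris=thrift://<metastore_host>:<metastore_port>
      hive.metastore.sasl.enabled=true
      hive.metastore.kerberos.principal=<hive_service_principal>
      # The client also needs the cluster's krb5.conf and a valid ticket (kinit) before connecting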
  • [Help Wanted] Timeline files of a static Hudi table are never archived, leading to too many small files on HDFS
    MRS 3.2.0, Hudi 0.11. The scenario is a static table with daily offline batch runs. Every day spark-sql runs an insert ... select that writes 0 rows into a Hudi COW table; the timeline files are never archived, so small files keep accumulating. Hoping someone can suggest a solution. spark-sql reproduction steps:
    -- Create the source table
    CREATE TABLE emp_test (
      empno int, ename string, job string, mgr int, hiredate string,
      sal int, comm int, deptno int, tx_date string
    ) using hudi
    options (
      type='cow',
      primaryKey='empno',
      payloadClass='org.apache.hudi.common.model.OverwriteNonDefaultsWithLatestAvroPayload',
      preCombineField='tx_date',
      hoodie.cleaner.commits.retained='1',
      hoodie.keep.min.commits='2',
      hoodie.keep.max.commits='3',
      hoodie.index.type='SIMPLE'
    );
    insert into emp_test values
    (7369,'SMITH','CLERK',7902,'1980-12-17',800,100,20,'2022-11-17'),
    (7499,'ALLEN','SALESMAN',7698,'1981-02-20',1600,300,30,'2022-11-17'),
    (5233,'ANDY','DEVELOPER',9192,'1996-05-30',5000,3000,10,'2022-11-13');
    -- Create the second table
    create table emp_test2 using hudi
    options (
      type='cow',
      primaryKey='empno',
      payloadClass='org.apache.hudi.common.model.OverwriteNonDefaultsWithLatestAvroPayload',
      preCombineField='tx_date',
      hoodie.cleaner.commits.retained='1',
      hoodie.keep.min.commits='2',
      hoodie.keep.max.commits='3',
      hoodie.index.type='SIMPLE'
    ) as select * from emp_test where 1<>1;
    -- Initialize the second table
    insert into emp_test2 select * from emp_test;
    -- The second table gets no new data each day:
    insert into emp_test2 select * from emp_test limit 0;  -- executed 9 times, once per daily run
    Observation: on HDFS, the timeline instant files under the second table's /.hoodie directory keep growing and are never archived; /archived stays empty (no archive files at all).
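    A hedged way to inspect what the timeline is doing from spark-sql, using the Call Procedure support added in Hudi 0.11; whether the procedure is packaged in MRS 3.2.0's Hudi, and whether session-level hoodie.* settings reach the writer when table options are ignored, are both assumptions to verify:
      -- Show the active (un-archived) commits on the second table
      call show_commits(table => 'emp_test2', limit => 20);
      -- Try passing the archival bounds as session configs instead of table options
      set hoodie.keep.min.commits=2;
      set hoodie.keep.max.commits=3;
      set hoodie.cleaner.commits.retained=1;
      insert into emp_test2 select * from emp_test limit 0;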
  • [Tech Share] Hands-on with a Kafka cluster on Huawei Cloud MRS
    1. Purchase an MRS streaming cluster, following the Huawei Cloud user manual.
    2. Install the Kafka client: download the cluster client from the FusionInsight Manager login page, generate it, install it on the other nodes in the cluster, unpack and verify the package, and complete the installation.
    3. Kafka messaging: log in to the master1 node and configure the environment variables; enable IAM user synchronization; look up and record the IP address of one ZooKeeper role instance; create a Kafka topic named "77"; look up and record the IP of any Kafka role instance; then send and receive messages on topic test, with the message content being the full-pinyin name (liuchengjie) and student ID (201250125).
    4. Use Kafka from Python; the result shows the same name (liuchengjie) and student ID (201250125).
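    A sketch of the client-side commands such a walkthrough typically boils down to, assuming the MRS client is installed under /opt/client and substituting the recorded ZooKeeper/Kafka instance IPs; the ports vary by MRS version and security mode, so all bracketed values are placeholders:
      source /opt/client/bigdata_env
      # Create the topic against ZooKeeper (MRS registers Kafka under the /kafka znode)
      kafka-topics.sh --create --zookeeper <zk_ip>:<zk_port>/kafka --partitions 2 --replication-factor 2 --topic 77
      # Produce and consume test messages
      kafka-console-producer.sh --broker-list <broker_ip>:<kafka_port> --topic 77
      kafka-console-consumer.sh --bootstrap-server <broker_ip>:<kafka_port> --topic 77 --from-beginning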
  • [Help Wanted] How to establish cross-realm Kerberos trust between MRS and an open-source big-data cluster
    When configuring mutual trust between MRS and an open-source big-data cluster, we created users on both sides, configured each side's KDC server and HDFS for the peer, set up the trust on the MRS web UI, and restarted. Cross-cluster access from either side then fails with the same error:
    [root@bdtl-vm-1652 ~]# hdfs dfs -ls /
    2023-11-09 09:58:10,209 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1699495088839
    (the same WARN repeats four more times between 09:58:15 and 09:58:17)
    2023-11-09 09:58:21,389 WARN ipc.Client: Couldn't setup connection for hdpuser@HDP.COM to xx.xx.xx.xx:25000
    javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Generic error (description in e-text) (60) - PROCESS_TGS)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:410)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:627)
        at org.apache.hadoop.ipc.Client$Connection.access$2400(Client.java:418)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:855)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:851)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1890)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:851)
        at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:418)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1694)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:245)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:131)
        at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:1008)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:435)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1892)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1805)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1802)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1817)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
        at org.apache.hadoop.fs.Globber.doGlob(Globber.java:367)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:205)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2196)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:345)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:252)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:235)
        at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:107)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:343)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:410)
    Caused by: GSSException: No valid credentials provided (Mechanism level: Generic error (description in e-text) (60) - PROCESS_TGS)
        at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:772)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 45 more
    Caused by: KrbException: Generic error (description in e-text) (60) - PROCESS_TGS
        at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
        at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:226)
        at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:237)
        at sun.security.krb5.internal.CredentialsUtil.serviceCredsSingle(CredentialsUtil.java:477)
        at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:340)
        at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:314)
        at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:169)
        at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:490)
        at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:695)
        ... 48 more
    Caused by: KrbException: Identifier doesn't match expected value (906)
        at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
        at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
        at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
        at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
        ... 56 more
    ls: DestHost:destPort xx.xx.xx.xx:25000 , LocalHost:localPort xx.xx.xx.xx:0. Failed on local exception: java.io.IOException: Couldn't setup connection for hdpuser@HDP.COM to xx.xx.xx.xx:25000
    Is there some configuration missing here?
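    For what it's worth, a PROCESS_TGS failure ending in "Identifier doesn't match expected value (906)" during cross-realm access usually points at the cross-realm krbtgt principals: both KDCs need krbtgt/<REALM_B>@<REALM_A> and krbtgt/<REALM_A>@<REALM_B> created with identical passwords and encryption types, plus matching [capaths]/[domain_realm] entries in krb5.conf on both sides. A hedged sketch with placeholder realms (HDP.COM for the open-source side, <MRS_REALM> for the realm configured on Manager):
      # In kadmin.local on the open-source KDC; the MRS side must hold the same two
      # principals with the same password and enctypes:
      addprinc -e "aes256-cts-hmac-sha1-96:normal" krbtgt/<MRS_REALM>@HDP.COM
      addprinc -e "aes256-cts-hmac-sha1-96:normal" krbtgt/HDP.COM@<MRS_REALM>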
  • [Help Wanted] How should a fresh university graduate get into big data and cloud computing?
    Is it better to invest the time and effort in studying for certifications such as HCIE-Cloud or HCIE-Big Data, or to find an internship early and build up experience on real projects?