• [Help Wanted] How to establish cross-realm Kerberos trust between MRS and an open-source big data cluster
    While configuring mutual trust between MRS and an open-source big data cluster, we created the users on both sides, configured the peer's Kerberos server and HDFS on each side, configured the trust on the MRS web UI, and restarted. Cross-cluster access attempted from either cluster then fails with the same error:

```
[root@bdtl-vm-1652 ~]# hdfs dfs -ls /
2023-11-09 09:58:10,209 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1699495088839
(the same WARN repeats at 09:58:15,102 / 09:58:15,719 / 09:58:17,278 / 09:58:17,728)
2023-11-09 09:58:21,389 WARN ipc.Client: Couldn't setup connection for hdpuser@HDP.COM to xx.xx.xx.xx:25000
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Generic error (description in e-text) (60) - PROCESS_TGS)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:410)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:627)
    at org.apache.hadoop.ipc.Client$Connection.access$2400(Client.java:418)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:855)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:851)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1890)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:851)
    at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:418)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1694)
    at org.apache.hadoop.ipc.Client.call(Client.java:1519)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:245)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:131)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:1008)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:435)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1892)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1805)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1802)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1817)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
    at org.apache.hadoop.fs.Globber.doGlob(Globber.java:367)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:205)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2196)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:345)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:252)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:235)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:107)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:343)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:410)
Caused by: GSSException: No valid credentials provided (Mechanism level: Generic error (description in e-text) (60) - PROCESS_TGS)
    at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:772)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
    ... 45 more
Caused by: KrbException: Generic error (description in e-text) (60) - PROCESS_TGS
    at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
    at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:226)
    at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:237)
    at sun.security.krb5.internal.CredentialsUtil.serviceCredsSingle(CredentialsUtil.java:477)
    at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:340)
    at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:314)
    at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:169)
    at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:490)
    at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:695)
    ... 48 more
Caused by: KrbException: Identifier doesn't match expected value (906)
    at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
    at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
    at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
    at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
    ... 56 more
ls: DestHost:destPort xx.xx.xx.xx:25000 , LocalHost:localPort xx.xx.xx.xx:0. Failed on local exception: java.io.IOException: Couldn't setup connection for hdpuser@HDP.COM to xx.xx.xx.xx:25000
```

    Is some configuration missing somewhere?
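    A common cause of `KrbException: Identifier doesn't match expected value (906)` during PROCESS_TGS in a cross-realm setup is that the TGS reply could not be parsed as expected because the two KDCs disagree about the cross-realm krbtgt principals, or because the client's krb5.conf does not route both realms. A minimal sketch of the pieces that have to line up; the realm names are taken from the log above, while the host names and ports are placeholders:

```
# krb5.conf on the client side (a sketch; hosts and ports are placeholders)
[realms]
    HDP.COM = {
        kdc = kdc.hdp.example.com:88
    }
    HADOOP.COM = {
        kdc = kdc.mrs.example.com:21732
    }

[domain_realm]
    .hdp.example.com = HDP.COM
    .mrs.example.com = HADOOP.COM

[capaths]
    HDP.COM = {
        HADOOP.COM = .
    }
    HADOOP.COM = {
        HDP.COM = .
    }

# In addition, BOTH KDCs must hold the pair of cross-realm principals,
# created with the same password and the same encryption types on each side:
#   krbtgt/HADOOP.COM@HDP.COM
#   krbtgt/HDP.COM@HADOOP.COM
```

    If either krbtgt pair is missing, or the passwords/enctypes differ between the two KDCs, the referral step fails in exactly this way.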
• [Ops Handbook] Extending the password validity period of the kadmin/admin user
    Because the LDAP administrator password of a FusionInsight MRS 8.x cluster is randomly generated by the system, the following steps are required.

```shell
# Log in to the node where the client is installed and switch to user omm
su - omm

# 1. Obtain the ciphertext
cd /opt/huawei/Bigdata/FusionInsight_BASE_8.1.2.2/1_3_KerberosServer/etc
vim ENV_VARS            # take the ciphertext after LDAP_SERVER_PASSWD

# 2. Locate the decryption script
cd /opt/huawei/Bigdata/om-server/om/tools
vim recoveryRealm.sh    # edit the script for decryption (remember to revert it afterwards)

# 3. Decrypt the LDAP root user's password
./recoveryRealm.sh <ciphertext>    # the output is the password

# 4. Check the ldaps parameters and the domain information
ps -ef | grep ldap | grep 21750

# 5. Query the kadmin entry.
#    krbLastPwdChange is the time of the last password change;
#    krbPasswordExpiration is the password expiration time.
ldapsearch -H ldap://xxx.xx.xx.xx:21750 -LLL -x -D cn=root,dc=hadoop,dc=com -w '<password from step 3>' -b krbPrincipalName=kadmin/admin@<local realm>,cn=<local realm>,cn=krbcontainer,dc=hadoop,dc=com

# 6. Extend the password expiration time (choose the time yourself).
#    Note: this is a modify operation, so it must be ldapmodify, not ldapsearch.
ldapmodify -H ldap://xxx.xx.xx.xx:21750 -x -D cn=root,dc=hadoop,dc=com -w '<password from step 3>' <<EOF
dn: krbPrincipalName=kadmin/admin@<local realm>,cn=<local realm>,cn=krbcontainer,dc=hadoop,dc=com
changetype: modify
replace: krbPasswordExpiration
krbPasswordExpiration: <paste the value from step 5 here and adjust the time>
EOF

# 7. Re-run step 5 to check that the expiration time was updated

# 8. Optionally regenerate the keytab file and use it to authenticate kadmin
cd /opt/huawei/Bigdata/om-server/om/meta-0.0.1-SNAPSHOT/kerberos/scripts/
./genkeytab.sh kadmin/admin /home/omm/kadmin.keytab
kadmin -p kadmin/admin -kt /home/omm/kadmin.keytab
```
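    When editing the `krbPasswordExpiration` value, note that it is an LDAP generalized-time string in UTC, of the form `YYYYMMDDHHMMSSZ` (e.g. `20301231235900Z`). A small standalone helper for producing such a value, independent of the cluster tooling:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class KrbExpiration {
    // LDAP generalized-time pattern: 14 digits plus a literal 'Z' (UTC)
    private static final DateTimeFormatter GENERALIZED_TIME =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss'Z'");

    /** Returns an LDAP generalized-time string 'days' days from now, in UTC. */
    public static String expirationInDays(long days) {
        return ZonedDateTime.now(ZoneOffset.UTC).plusDays(days).format(GENERALIZED_TIME);
    }

    public static void main(String[] args) {
        // e.g. extend the password by roughly ten years
        System.out.println("krbPasswordExpiration: " + expirationInDays(3650));
    }
}
```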
• [Ecosystem Integration] How can C# use HttpClient to call the Kerberos-secured Elasticsearch on the Huawei FusionInsight platform?
    I have downloaded krb5.conf and user.keytab, the two files required for Kerberos authentication. How can a C# HttpClient be used to call the Kerberos-secured Elasticsearch on the Huawei FusionInsight platform?
• [Help Wanted] Kerberos authentication when connecting Flink to Elasticsearch
    When using Flink to write data to Elasticsearch, how should Kerberos authentication be performed on the sink side?
• [Middleware] Kafka Kerberos authentication failed again
```
An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed
[Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)])
occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being
unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding
'-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment.
Users must configure FQDN of kafka brokers when authenticating using SASL and
`socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm`
Kafka Client will go to AUTHENTICATION_FAILED state.
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
```
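    The message above already names the usual culprit: with GSSAPI the client requests a ticket for `kafka/<hostname>@REALM`, where `<hostname>` comes from resolving the broker address, so an IP or short name in `bootstrap.servers` yields a principal the KDC has never heard of. A small standalone diagnostic sketch (not from the thread; names are placeholders) showing which principal the client would end up asking for:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServicePrincipalCheck {
    /** Returns the service principal a GSSAPI Kafka client would request for
     *  the given broker address, e.g. "kafka/host.example.com@REALM". */
    public static String expectedPrincipal(String brokerHost, String realm) {
        try {
            String canonical = InetAddress.getByName(brokerHost).getCanonicalHostName();
            return "kafka/" + canonical + "@" + realm;
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("cannot resolve " + brokerHost, e);
        }
    }

    public static void main(String[] args) {
        // Replace "localhost" with your broker host. If this prints an IP or a
        // short name instead of the FQDN the broker's principal was registered
        // with, fix /etc/hosts or DNS (or try the JVM flag from the error text).
        System.out.println(expectedPrincipal("localhost", "HADOOP.COM"));
    }
}
```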
• [Middleware] Kafka Kerberos authentication
    [Function Module] The system setting System.setProperty("java.security.auth.login.config", jaasPath) does not get me through authentication, yet my jaas.conf file sits in a Linux directory and is accessible -- the program has already verified that. What could be the reason? I am using the Huawei Cloud MRS 2.1.x service; the sample project comes from https://github.com/huaweicloud/huaweicloud-mrs-example/tree/mrs-2.1.
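    A pattern that often narrows this down (a sketch, not Huawei sample code): validate the JAAS path before setting the property, because a relative path resolved against an unexpected working directory, or a file the runtime user cannot read, produces the same opaque login failure as a missing file:

```java
import java.io.File;

public class JaasConfigSetup {
    /** Sets java.security.auth.login.config after sanity-checking the path,
     *  so a bad path fails with a clear message instead of an opaque
     *  JAAS login error later on. */
    public static void configure(String jaasPath) {
        File f = new File(jaasPath);
        if (!f.isFile() || !f.canRead()) {
            throw new IllegalArgumentException(
                    "jaas.conf not found or unreadable: " + f.getAbsolutePath());
        }
        // Use the absolute path so the working directory no longer matters
        System.setProperty("java.security.auth.login.config", f.getAbsolutePath());
    }
}
```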
• [Ecosystem Integration] Bypassing Kerberos to connect to ZooKeeper (like using open-source ZK)
    [Function Module] Port 21005 makes it possible to bypass Kerberos when connecting to Kafka. Is there a way to bypass Kerberos when connecting to ZooKeeper, so that it can be used like open-source ZK?
• [Tutorial] How to change the Kerberos ticket lifetime in an MRS security-mode cluster?
    1. Log in to a cluster client node as user root.
    2. Log in to the Kerberos console:

```
cd /opt/Bigdata/client
source bigdata_env
kadmin -p kadmin/admin
```

    The password must be changed on first login; the default is Admin@123. The kadmin password cannot be recovered, so be sure to keep the changed password safe.
    3. After logging in, modify the lifetime with the following command. The admin user is used as the example here; other users work as well. The default is 24 hours, and the shortest allowed value is 15 minutes.

```
modprinc -maxlife 15min admin
```

    On success it prints: Principal "admin@HADOOP.COM" modified. Type q to leave the console. To change it back to 24 hours, run: modprinc -maxlife 24h admin
    4. In the environment, kinit as that user and then run klist; the lifetime now shows as 15 minutes:

```
kinit admin
klist

Ticket cache: FILE:/tmp/l00146419
Default principal: admin@HADOOP.COM
Valid starting       Expires              Service principal
05/13/2022 12:00:00  05/13/2022 12:15:00  krbtgt/HADOOP.COM@HADOOP.COM
```
• [Help Wanted] [FI] [Kafka (KCluster)] After Kerberos is enabled, code cannot fetch topics from the Kafka cluster
    As the title says, the error below is reported. Following the fix described at https://blog.csdn.net/li1987by/article/details/82856873 and replacing kafka-clients-2.6.0.jar with kafka-clients-1.1.0.jar makes the error disappear, but where can Huawei's kafka-clients-2.6.0.jar be downloaded? Our subsequent Spark jobs need kafka-clients 2.6.
• [Help Wanted] [FI] [Kafka (KCluster)] After Kerberos is enabled, code cannot fetch topics from the Kafka cluster
    As described, kafka-console-producer.sh and kafka-console-consumer.sh both run fine, but the code fails. With Kerberos debug mode enabled, the logs show authentication succeeding. The referenced jars are the Huawei Kafka client libs. Kafka version: 2.11-1.1.0. Could someone point me in the right direction? Many thanks.

```java
String krb5 = args[0];
String jaasPath = args[1];
String broker = args[2];

System.setProperty("java.security.krb5.conf", krb5);
System.setProperty("java.security.auth.login.config", jaasPath);
System.setProperty("zookeeper.server.principal", "zookeeper/hadoop.hadoop.com");

Properties props = new Properties();
props.put("bootstrap.servers", broker);
props.put("group.id", "g1");
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", ByteBufferDeserializer.class.getName());
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "GSSAPI");
props.put("sasl.kerberos.service.name", "kafka");

// AdminClient
AdminClient client = AdminClient.create(props);
ListTopicsResult listTopics = client.listTopics();
Set<String> strings = listTopics.names().get();
System.out.println(strings);
client.close();

// KafkaConsumer
KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(props);
Map<String, List<PartitionInfo>> stringListMap = consumer.listTopics();
System.out.println(stringListMap.size());
consumer.close();
```
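    For comparison, the Huawei MRS Kafka samples set one property that is absent from the snippet above: `kerberos.domain.name`. Whether that is the missing piece here is only a guess, but assembling the sample's property set in one place makes the diff easy to see. The broker address is a placeholder, and `hadoop.hadoop.com` is the usual FI default domain, to be adjusted per cluster:

```java
import java.util.Properties;

public class MrsKafkaProps {
    /** Builds consumer properties along the lines of the Huawei MRS samples.
     *  The kerberos.domain.name entry is the one most often missed; the value
     *  here is the common FI default and must match your cluster's domain. */
    public static Properties build(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("group.id", "g1");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteBufferDeserializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        // Present in the MRS sample code but absent from the snippet above:
        props.put("kerberos.domain.name", "hadoop.hadoop.com");
        return props;
    }
}
```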
• [Help Wanted] HBase Kerberos authentication fails
    Our client side uses Python, following this sample: https://github.com/huaweicloud/huaweicloud-mrs-example/tree/mrs-1.8/src/hbase-examples/hbase-python-example. But every time transport.open() is called, the server reports the following error:

```
2021-09-01 16:02:35,082 | ERROR | thrift-worker-24 | SASL negotiation failure | org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307)
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)]
    at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:199)
    at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:536)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:277)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:42)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:217)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:287)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)
    at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:858)
    at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
    at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
    at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:167)
    ... 8 more
Caused by: KrbException: Checksum failed
    at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:102)
    at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:94)
    at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:175)
    at sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:281)
    at sun.security.krb5.KrbApReq.<init>(KrbApReq.java:149)
    at sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:140)
    at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:831)
    ... 11 more
Caused by: java.security.GeneralSecurityException: Checksum failed
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
    at sun.security.krb5.internal.crypto.Aes256.decrypt(Aes256.java:76)
    at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:100)
    ... 17 more
```

    Could an expert please help analyze what the cause might be?
• [Typical Case] FA801 reports a password error when changing a user's password
    [Fault type]: Password-change error on FA801
    [Version]: FusionAccess 801
    [Keywords]: FA801 client password-change error
    [Problem analysis]:
    (1) The site reported that changing a password through the FA801 client fails with a user name/password error, even though the user's input is correct and login itself works fine.
    (2) Checked the WI logs; the returned error message was null.
    (3) Checked the liteas logs and found Kerberos pre-authentication failing: Kerberos could not obtain a ticket.
    (4) Captured packets on liteas and filtered for kerberos; every Kerberos error was the same one.
    (5) Research showed that this error is caused by a time difference. A check of the live environment confirmed that the liteas time was 4 minutes ahead of AD.
    (6) Repeated testing showed the problem triggers once liteas is more than 2 minutes ahead of AD, whereas in the other direction it only errors at 5 minutes behind.
    (7) In the 801 environment the infrastructure VMs sync time from the local domain (liteas) by default; even with AD connected they still sync from liteas, while liteas syncs from the upstream clock source.
    (8) Configure an upstream clock source so that both liteas and AD sync time from it.
    [Solution] Adjust the liteas time to match AD.
• [Development] [Flink] Flink accessing a Kerberos-enabled Kafka in another cluster
    [Function Module] The Flink in our FI cluster has Kerberos authentication enabled, configured in flink-conf.yaml. We now need Flink to consume a Kerberos-enabled Kafka in a different cluster, and we have obtained that Kafka cluster's authentication files: krb5.conf, jaas.conf and user.keytab.
    [Steps & Symptom] The Flink application as a whole needs our own cluster's authentication. How can a single Flink program also authenticate to the other Kafka cluster and use it as the application's source? Any pointers from the experts would be appreciated!
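    One approach worth trying for the question above (a sketch, not verified against FI): keep the local cluster's credentials in flink-conf.yaml as they are, and give the Kafka source its own login via the per-client `sasl.jaas.config` setting supported by kafka-clients 0.10.2 and later, so that this one consumer authenticates with the remote cluster's keytab without touching the global `java.security.auth.login.config`. The remote realm also has to be merged into the JVM-wide krb5.conf, since `java.security.krb5.conf` is process-global. All paths, the principal and the service name below are placeholders:

```properties
# Properties for the Kafka source that points at the remote cluster
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# Per-client JAAS: authenticates this consumer with its own keytab,
# independent of the Flink job's own cluster credentials
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true \
  keyTab="/opt/keytabs/user.keytab" \
  principal="kafkauser@REMOTE.REALM.COM";
```

    The remote cluster's jaas.conf can serve as the source for the keytab path and principal to plug into this entry.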
• [Development] Shipping binlog to Kafka with open-source canal fails with "server not found in kerberos database"
    [Steps & Symptom] 1. The specific error message is shown in the attached screenshot.
• [Data Integration] Connecting Kettle 8.3 to the HDFS component of a Kerberos-enabled MRS cluster
    ## Background Kettle is one of the easiest open-source ETL tools to pick up and is used in many data-integration projects. After commercialization, Kettle was renamed Pentaho, which ships a paid product suite with many enterprise-grade enhancements. Kettle's support for Kerberos-enabled Hadoop clusters currently lives in the commercial edition; the open-source edition does not support Kerberos-secured Hadoop clusters. Through a simple plugin modification, this article enables Kettle 8.3 to connect to the HDFS component of a Kerberos-enabled MRS cluster. ## How to connect See the original blog post: [**link**](https://bbs.huaweicloud.com/blogs/255354)