-
HBase: how many records should each write commit (batch) contain, and why?
-
How can Flume be replaced with a drop-in alternative such as CDL?
-
Why is it not recommended to deploy Flume and DataNode on the same node? Why does this create a risk of data imbalance?
-
Connecting Python to a Kerberos-secured Kafka cluster

Environment
- MRS: 8.2.0, Kafka: 2.4.0
- Python: 3.8.8
- Python dependencies:

    ./pip3 freeze | grep kafka
    confluent-kafka==2.2.0
    kafka==1.3.5
    kafka-python==2.0.2
    ./pip3 freeze | grep krbticket
    krbticket==1.0.6
    ./pip3 freeze | grep gssapi
    gssapi==1.8.3

Producer code

    from krbticket import KrbConfig, KrbCommand
    from kafka import KafkaProducer
    import json
    import os

    jaas_conf = '/opt/kafka_jaas.conf'
    krb5_conf = '/opt/sandbox/krb5.conf'
    user_name = 'sandbox'
    keytab_conf = '/opt/sandbox/user.keytab'

    # Obtain a Kerberos ticket before creating the producer
    os.environ['KRB5CCNAME'] = '/tmp/krb5cc_0'
    kconfig = KrbConfig(principal='sandbox@HADOOP.COM', keytab=keytab_conf)
    KrbCommand.kinit(kconfig)
    os.environ['KAFKA_OPTS'] = (f'-Djava.security.auth.login.config={jaas_conf}'
                                f' -Djava.security.krb5.conf={krb5_conf}')

    producer = KafkaProducer(bootstrap_servers=['xxx.xxx.xx.xx:21007'],
                             security_protocol='SASL_PLAINTEXT',
                             sasl_mechanism='GSSAPI',
                             sasl_kerberos_service_name='kafka',
                             sasl_kerberos_domain_name='hadoop.hadoop.com',
                             api_version=(2, 4, 0))

    # The message value must be serialized to bytes before sending
    msg = json.dumps("haha").encode()
    producer.send('aaa', msg)

Notes
- Prepare the authentication user name, the keytab file and the krb5.conf file.
- Create a kafka_jaas.conf file with content such as:

    KafkaClient {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        principal="sandbox@HADOOP.COM"
        keyTab="/opt/sandbox/user.keytab"
        useTicketCache=false
        serviceName="kafka"
        storeKey=true
        debug=true;
    };

- Check the cluster domain name and set the sasl_kerberos_domain_name parameter accordingly; the default is hadoop.hadoop.com, so if the cluster domain has been changed this parameter must be changed as well.
- The api_version parameter must match the MRS Kafka version.
- When producing, the message value must be serialized; see the JSON encoding at the end of the producer code.
- The Kafka topic name is aaa.

Test result: the produced messages can be consumed as expected (screenshots omitted).

Consumer code

    from kafka import KafkaConsumer
    from krbticket import KrbConfig, KrbCommand
    import os

    jaas_conf = '/opt/kafka_jaas.conf'
    krb5_conf = '/opt/sandbox/krb5.conf'
    user_name = 'sandbox'
    keytab_conf = '/opt/sandbox/user.keytab'

    os.environ['KRB5CCNAME'] = '/tmp/krb5cc_0'
    kconfig = KrbConfig(principal='sandbox@HADOOP.COM', keytab=keytab_conf)
    KrbCommand.kinit(kconfig)
    os.environ['KAFKA_OPTS'] = (f'-Djava.security.auth.login.config={jaas_conf}'
                                f' -Djava.security.krb5.conf={krb5_conf}')

    consumer = KafkaConsumer('aaa',
                             bootstrap_servers=['xxx.xx.x.xxx:21007'],
                             security_protocol='SASL_PLAINTEXT',
                             sasl_mechanism='GSSAPI',
                             auto_offset_reset='earliest',
                             group_id='python_mfa_group',
                             sasl_kerberos_service_name='kafka',
                             sasl_kerberos_domain_name='hadoop.hadoop.com',
                             api_version=(2, 4, 0))

    for message in consumer:
        # message value and key are raw bytes -- decode if necessary,
        # e.g. message.value.decode('utf-8')
        print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition,
                                             message.offset, message.key, message.value))

Verification: produce from the Kafka client and check the consumed output (screenshots omitted).

FAQ
- Error when creating the consumer: Kerberos authentication must be performed in the code, and the Kafka version must be passed via the api_version parameter.
- Error when producing data: the value must be serialized before sending, as shown in the producer code above.
- Timeout when producing and errors when consuming: the Python gssapi dependency is not installed; install it with:

    ./pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple gssapi
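As a small variation on the producer above (a sketch, not from the original post), kafka-python can perform the JSON serialization itself through the value_serializer parameter, which removes the manual .encode() step that the FAQ calls out. The broker address and security settings below are the ones already assumed in the post.

```python
# Sketch: same connection settings as in the post, but the producer
# serializes values itself via value_serializer (a standard kafka-python
# option), so plain Python objects can be passed to send().
import json
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['xxx.xxx.xx.xx:21007'],
                         security_protocol='SASL_PLAINTEXT',
                         sasl_mechanism='GSSAPI',
                         sasl_kerberos_service_name='kafka',
                         sasl_kerberos_domain_name='hadoop.hadoop.com',
                         api_version=(2, 4, 0),
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'))

producer.send('aaa', {'event': 'haha'})  # dict is converted to JSON bytes automatically
producer.flush()                         # block until the message is actually delivered
```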
-
How can I obtain the secondary-development guide or sample code for the FusionInsight HD 6.5.1 CTBase Java API?
-
Cluster environment: production. Problem description: the operating system of a single data node has been reinstalled, and the host now needs to be reinstalled from the Manager page. The question is whether preinstall and precheck need to be run before reinstalling the host.
-
In a FusionInsight HD 6517 cluster, how can HBase compare the data of two tables for consistency?
-
Connecting Python to ClickHouse

ClickHouse generic JDBC port
The ClickHouse JDBC interface uses the HTTP protocol. The corresponding ports of the Huawei ClickHouse service can be found on the Manager -> ClickHouse page, in the logical cluster section. The JDBC URL differs between the non-encrypted and the encrypted port:
- Non-encrypted port 21426, JDBC URL: jdbc:clickhouse://x.x.x.x:21426/default
- Encrypted port 21428, JDBC URL: jdbc:clickhouse://x.x.x.x:21428/default?ssl=true&sslmode=none
The IP address to connect to is that of a ClickHouse Balancer instance.
Note: this walkthrough uses the non-encrypted port.

Prerequisites
A Python 3 environment and a reachable MRS cluster are required. To build Python 3 from source:

    tar zxvf Python-3.8.0.tgz
    cd Python-3.8.0
    mkdir -p /usr/local/python-3.8.0
    ./configure --prefix=/usr/local/python-3.8.0 --enable-optimizations --with-ssl
    make && make install
    ln -s /usr/local/python-3.8.0/bin/python3 /usr/bin/python3
    ln -s /usr/local/python-3.8.0/bin/pip3 /usr/bin/pip3
    ll /usr/bin/python*

Connecting through the generic JDBC driver
Install the dependencies:

    ./pip3 install jpype1==1.4.1
    ./pip3 install JayDeBeApi==1.2.3

Python code:

    import jaydebeapi
    import pandas as pd

    jars = ['/opt/lyf/lib1/clickhouse-jdbc-0.3.1-h0.cbu.mrs.320.r11.jar',
            '/opt/lyf/lib1/commons-codec-1.15.jar',
            '/opt/lyf/lib1/commons-logging-1.2.jar',
            '/opt/lyf/lib1/httpclient-4.5.13.jar',
            '/opt/lyf/lib1/httpcore-4.4.13.jar',
            '/opt/lyf/lib1/lz4-java-1.7.1.jar',
            '/opt/lyf/lib1/slf4j-api-1.7.36.jar',
            '/opt/lyf/lib1/us-common-1.0.66.jar',
            '/opt/lyf/lib1/bcprov-jdk15on-1.70.jar']

    conn = jaydebeapi.connect("ru.yandex.clickhouse.ClickHouseDriver",
                              "jdbc:clickhouse://x.x.x.x:21426/default",
                              ["username", "passwd"],
                              jars=jars)

    sql = "SELECT * FROM addressbook"
    df_ck = pd.read_sql(sql, conn)
    print(df_ck)
    conn.close()

Note: place the required library files in the corresponding directory:
commons-codec-1.15.jar, commons-logging-1.2.jar, httpclient-4.5.13.jar, httpcore-4.4.13.jar, lz4-java-1.7.1.jar, slf4j-api-1.7.36.jar, bcprov-jdk15on-1.70.jar, clickhouse-jdbc-0.3.1-h0.cbu.mrs.320.r11.jar, us-common-1.0.66.jar

Connecting through clickhouse_connect
Install the dependency:

    ./pip3 install clickhouse_connect==0.6.4

Python code:

    import clickhouse_connect

    client = clickhouse_connect.get_client(host='x.x.x.x', port=21426,
                                           username='username', password='passwd')
    client.command('show tables')
    client.command('select * from people')

Note: see cid:link_0 for reference.

FAQ
The Python code fails with ClassNotFound when the connection is made like this:

    conn = jaydebeapi.connect("ru.yandex.clickhouse.ClickHouseDriver",
                              "jdbc:clickhouse://x.x.x.x:port/default?ssl=true&sslmode=none?user=username&password=passwd!@",
                              jars=['/opt/lyf/lib1/clickhouse-jdbc-0.3.1-h0.cbu.mrs.320.r11.jar'])

Check the method signature with help(jaydebeapi.connect). Solution: pass the credentials as a separate list and supply all required jars:

    conn = jaydebeapi.connect("ru.yandex.clickhouse.ClickHouseDriver",
                              "jdbc:clickhouse://x.x.x.x:port/default",
                              ["username", "passwd"],
                              jars=jars)   # the full jar list shown above

If ClassNotFound is still reported, use grep -R to search the library folders for the missing class name, copy the jar that contains it to the corresponding directory, and add it to the jars parameter.
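The post only demonstrates the non-encrypted port. For the encrypted port, the same jaydebeapi call should work with the SSL-enabled URL form given at the top of the post; the following is a sketch assuming port 21428 and the same jar list, not something the original post verifies.

```python
# Sketch: connecting over the encrypted ClickHouse Balancer port (21428),
# reusing the jar list from the non-encrypted example. The URL parameters
# follow the form stated earlier in the post (ssl=true&sslmode=none).
import jaydebeapi

jars = ['/opt/lyf/lib1/clickhouse-jdbc-0.3.1-h0.cbu.mrs.320.r11.jar',
        '/opt/lyf/lib1/commons-codec-1.15.jar',
        '/opt/lyf/lib1/commons-logging-1.2.jar',
        '/opt/lyf/lib1/httpclient-4.5.13.jar',
        '/opt/lyf/lib1/httpcore-4.4.13.jar',
        '/opt/lyf/lib1/lz4-java-1.7.1.jar',
        '/opt/lyf/lib1/slf4j-api-1.7.36.jar',
        '/opt/lyf/lib1/us-common-1.0.66.jar',
        '/opt/lyf/lib1/bcprov-jdk15on-1.70.jar']

conn = jaydebeapi.connect(
    "ru.yandex.clickhouse.ClickHouseDriver",
    "jdbc:clickhouse://x.x.x.x:21428/default?ssl=true&sslmode=none",
    ["username", "passwd"],
    jars=jars)

curs = conn.cursor()
curs.execute("SELECT 1")   # simple connectivity check
print(curs.fetchall())
curs.close()
conn.close()
```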
-
Why is it recommended that the total number of partitions in a Kafka cluster not exceed 10,000? Should replica partitions be subtracted from the total partition count?
-
Loader overview
Loader is the tool in FusionInsight HD for exchanging data and files between the cluster and relational databases or file systems. It is built on open-source Sqoop with extensive optimization and extension. It provides a wizard-style graphical job configuration and management UI, scheduled tasks that run Loader jobs periodically, and lets you specify different data sources, configure data cleansing and transformation steps, and configure the cluster storage system from the UI.

Loader features
- Graphical: visual configuration and monitoring interfaces, easy to operate.
- High performance: uses MapReduce to process data in parallel.
- High reliability: Loader Server runs in active/standby mode; jobs run on MapReduce and support retry on failure; failed jobs leave no residual data.
- Security: Kerberos authentication and job-level permission management.

Background
The Loader web UI shows job history and run status. The customer needs to obtain job status and history information through the REST API instead.

Key points
1. The interaction uses Kerberos authentication, which in the HTTP access scenario is also called SPNEGO authentication.
2. As when talking to the component's native web UI, SSL must be used with the server.
3. Place the authentication files user.keytab and krb5.conf in the conf directory.

Code walkthrough

    // Loader REST endpoints
    String url = "https://x.x.x.x:20026/Loader/LoaderServer/124/loader/v1/job/all?paged=true&offset=1&limit=2&kw=&group=0&order=desc&order-by=cdate";
    // String url = "https://x.x.x.x:20026/Loader/LoaderServer/124/loader/v1/submission/history/3?paged=true&limit=10&offset=1";
    System.out.println("PATH_TO_KEYTAB " + PATH_TO_KEYTAB);
    System.setProperty("java.security.krb5.conf", PATH_TO_KRB5_CONF);
    System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");

The Loader port is 20026. The first URL lists information about all jobs; the limit parameter controls how many jobs are returned. The second URL returns the run history of a single job; the ID after history/ (3 in this example) can be changed to query a different record.

Sample runs
Listing all jobs, or fetching a single job's history, returns JSON such as:

    {"all":[{"exception":"","counters":{"org.apache.hadoop.mapreduce.FileSystemCounter":{"FILE_LARGE_READ_OPS":0,"HDFS_BYTES_READ_EC":0,"FILE_WRITE_OPS":0,"HDFS_READ_OPS":270,"HDFS_BYTES_READ":3659,"HDFS_LARGE_READ_OPS":0,"FILE_READ_OPS":0,"FILE_BYTES_WRITTEN":11554899,"FILE_BYTES_READ":0,"HDFS_WRITE_OPS":91,"HDFS_BYTES_WRITTEN":166800009},"org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter":{"BYTES_WRITTEN":166800009},"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter":{"BYTES_READ":0},"org.apache.hadoop.mapreduce.JobCounter":{"TOTAL_LAUNCHED_MAPS":30,"MB_MILLIS_MAPS":3549155328,"SLOTS_MILLIS_REDUCES":0,"VCORES_MILLIS_MAPS":866493,"SLOTS_MILLIS_MAPS":3465972,"OTHER_LOCAL_MAPS":30,"MILLIS_MAPS":866493},"org.apache.sqoop.submission.counter.SqoopCounters":{"ROWS_SKIPPED":0,"ROWS_READ":5903983,"ROWS_WRITTEN":5903983},"org.apache.hadoop.mapreduce.TaskCounter":{"SPILLED_RECORDS":0,"MERGED_MAP_OUTPUTS":0,"VIRTUAL_MEMORY_BYTES":124308541440,"MAP_INPUT_RECORDS":0,"MAP_PHYSICAL_MEMORY_BYTES_MAX":541884416,"SPLIT_RAW_BYTES":3659,"FAILED_SHUFFLE":0,"PHYSICAL_MEMORY_BYTES":15452405760,"GC_TIME_MILLIS":15096,"MAP_VIRTUAL_MEMORY_BYTES_MAX":4216520704,"MAP_OUTPUT_RECORDS":5903983,"CPU_MILLISECONDS":817330,"COMMITTED_HEAP_BYTES":18252038144}},"last-update-date":1686644843155,"last-udpate-user":"xxx","output":"--: --","input":"MYSQL: server_diskspace[null]","caller":-1,"creation-user":"xxx","progress":1.0,"creation-date":1686644784844,"external-id":"job_1680145428800_0188","dirty-data-link":"http:\/\/172-16-9-118:25002\/explorer.html#\/user\/loader\/etl_dirty_data_dir\/1\/1680145428800_0188","job":1,"external-link":"http:\/\/172-16-4-22:26000\/proxy\/application_1680145428800_0188\/","status":"SUCCEEDED"}],"total-num":1}

JSON returned when the job failed (the exception field reads "Failed to execute the SQL statement. Cause: sql execute error"):

    {"all":[{"output":"--: --","exception":"执行SQL语句失败。 原因: sql execute error","input":"MYSQL: people[null]","caller":-1,"creation-user":"xxx","progress":-1.0,"creation-date":1686644587315,"last-update-date":1686644587315,"dirty-data-link":"","job":3,"last-udpate-user":"xxx","status":"FAILURE_ON_SUBMIT"}],"total-num":1}
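Since the post only shows the Java call, here is a minimal sketch (not part of the original post) of pulling the fields a monitoring script typically needs, such as job id, status and the Sqoop row counters, out of the JSON shown above; it assumes the REST response has already been fetched into a string.

```python
# Sketch: extracting job status and Sqoop counters from the Loader
# /job/all response shown above. `response_text` is assumed to hold
# the JSON body returned by the REST call.
import json

def summarize_loader_jobs(response_text: str):
    body = json.loads(response_text)
    for job in body.get("all", []):
        counters = job.get("counters") or {}
        sqoop = counters.get("org.apache.sqoop.submission.counter.SqoopCounters", {})
        yield {
            "job": job.get("job"),
            "status": job.get("status"),          # e.g. SUCCEEDED / FAILURE_ON_SUBMIT
            "rows_read": sqoop.get("ROWS_READ"),
            "rows_written": sqoop.get("ROWS_WRITTEN"),
            "exception": job.get("exception"),
        }

# Example usage with the successful-job JSON from the post:
# for summary in summarize_loader_jobs(response_text):
#     print(summary)
```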
-
For an on-premises FusionInsight HD 6513 cluster, is there a way to disable Kerberos for Spark2x?
-
1 Background
Yarn stores the statistics of each finished MapReduce job in a .jhist file under a date-based HDFS directory (for example, /mr-history/done/2023/05/15/xxx.jhist). Parsing this file minimizes the impact on HDFS: collecting the counters of one MR job requires a single HDFS access to fetch the roughly 20 KB file, after which the content is parsed on the client side.

2 Obtaining the xxx.jhist file
List the jhist files for a given date (for example, 23 May 2023):

    hdfs dfs -ls /mr-history/done/2023/05/23/000000

Download the jhist file:

    hdfs dfs -get /mr-history/done/2023/05/23/000000/job_1683342225080_0138-1684848693047-Loader%3A+testsftp2hive_1684848475163-1684848723140-1-0-SUCCEEDED-default-1684848703642.jhist

3 Parsing the counters
The example below parses the counters with Java code.

3.1 Add the dependencies

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-hs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-all</artifactId>
        <version>1.8.5</version>
    </dependency>

3.2 Parsing code

    package com.huawei.bigdata.mapreduce.examples;

    import static org.mockito.Mockito.mock;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobACLsManager;
    import org.apache.hadoop.mapreduce.Counter;
    import org.apache.hadoop.mapreduce.CounterGroup;
    import org.apache.hadoop.mapreduce.v2.api.records.JobId;
    import org.apache.hadoop.mapreduce.v2.hs.CompletedJob;
    import org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.HistoryFileInfo;

    import java.io.IOException;

    public class TestYarnCounter {
        public static void main(String[] args) {
            // Local path of the downloaded jhist file
            Path fullHistoryPath = new Path("D:\\history\\job_1683342225080_0138-1684848693047-Loader%3A+testsftp2hive_1684848475163-1684848723140-1-0-SUCCEEDED-default-1684848703642.jhist");
            Configuration conf = new Configuration();
            boolean loadTasks = false;
            HistoryFileInfo info = mock(HistoryFileInfo.class);
            JobId jobId = null;
            JobACLsManager jobAclsManager = new JobACLsManager(conf);
            try {
                CompletedJob completedJob = new CompletedJob(conf, jobId, fullHistoryPath,
                        loadTasks, "user", info, jobAclsManager);
                CounterGroup counterGroup = completedJob.getAllCounters()
                        .getGroup("org.apache.hadoop.mapreduce.FileSystemCounter");
                for (Counter counter : counterGroup) {
                    System.out.println(counter.getName() + ":" + counter.getValue());
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

4 Result

    FILE_BYTES_READ:0
    FILE_BYTES_WRITTEN:393363
    FILE_READ_OPS:0
    FILE_LARGE_READ_OPS:0
    FILE_WRITE_OPS:0
    HDFS_BYTES_READ:204
    HDFS_BYTES_WRITTEN:153
    HDFS_READ_OPS:3
    HDFS_LARGE_READ_OPS:0
    HDFS_WRITE_OPS:3

The same result can also be viewed on the ResourceManager web UI: click the corresponding application id, then click "history" on the application page to view the counter information.
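As a small automation aid (a sketch, not part of the original post), the two shell commands in section 2 can be scripted so that all jhist files for a date are downloaded in one pass. It assumes an authenticated HDFS client is on PATH and the date-based directory layout shown above.

```python
# Sketch: download every .jhist file for a given date using the same
# `hdfs dfs -ls` / `hdfs dfs -get` commands shown in section 2.
import subprocess

def fetch_jhist_files(date_dir="/mr-history/done/2023/05/23/000000",
                      local_dir="."):
    listing = subprocess.run(["hdfs", "dfs", "-ls", date_dir],
                             capture_output=True, text=True, check=True)
    for line in listing.stdout.splitlines():
        # the HDFS path is the last column of each listing line
        path = line.split()[-1] if line.strip() else ""
        if path.endswith(".jhist"):
            subprocess.run(["hdfs", "dfs", "-get", path, local_dir], check=True)
            print("downloaded", path)

if __name__ == "__main__":
    fetch_jhist_files()
```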
-
1 The text_en_splitting_tight analyzer

Analyzer behavior (tokenizer and filters):
1. Splits Chinese and English text on spaces.
2. Replaces terms with synonyms, e.g. a search for 北大 becomes 北京大学.
3. Removes stop words such as a, an, but.
4. Strips special characters, e.g. wi-fi becomes wifi.
5. Converts upper case to lower case.
6. Protects listed words from being modified by the tokenizer.
7. Reduces English plurals to the singular form (e.g. dogs becomes dog).
8. Avoids duplicate processing.

Input: 我们 的祖国 名称 是 ChI_na, we are 北大 dogs
Resulting tokens: 我们 / 的祖国 / 名称 / 是 / china / we / 北京大学 / dog

1.1 Create the analyzer

    curl -XPUT --tlsv1.2 --negotiate -k -u : "https://xx.xx:24100/h0323?pretty" -H 'Content-Type:application/json' -d'
    {
      "settings": {
        "analysis": {
          "char_filter": {
            "my_char_filter": { "type": "mapping", "mappings": ["北大 =>北京大学", "_ => "] }
          },
          "filter": {
            "my_stopword": { "type": "stop", "stopwords": ["a", "an", "but", "are"] }
          },
          "tokenizer": {
            "my_tokenizer": { "type": "pattern", "pattern": "[ ]" }
          },
          "analyzer": {
            "text_en_splitting_tight": {
              "type": "custom",
              "char_filter": ["my_char_filter"],
              "filter": ["my_stopword", "lowercase"],
              "tokenizer": "my_tokenizer"
            }
          }
        }
      }
    }'

1.2 Run an analysis query

    curl -XGET --tlsv1.2 --negotiate -k -u : "https://xx.xx:24100/h0323/_analyze?pretty" -H 'Content-Type:application/json' -d'
    {
      "analyzer": "text_en_splitting_tight",
      "text": "我们 的祖国 名称 是 ChI_na, we are 北大 dogs"
    }'

2 The text_general analyzer

Index-time behavior:
1. Automatically assigns a type to each token.
2. Removes stop words such as a, an, but.
3. Converts upper case to lower case.

Input: 我们 的祖国 名称 是 ChI_na, we are 北大 dogs
Resulting tokens: 我们 / 的祖国 / 名称 / 是 / china / we / 北大 / dogs

Query-time behavior:
1. Automatically assigns a type to each token.
2. Removes stop words such as a, an, but.
3. Replaces terms with synonyms, e.g. a search for 北大 becomes 北京大学.
4. Converts upper case to lower case.

Input: 我们 的祖国 名称 是 ChI_na, we are 北大 dogs
Resulting tokens: 我们 / 的祖国 / 名称 / 是 / china / we / 北京大学 / dogs

2.1 Create the analyzer (index)

    curl -XPUT --tlsv1.2 --negotiate -k -u : "https://xx.xx:24100/h0323?pretty" -H 'Content-Type:application/json' -d'
    {
      "settings": {
        "analysis": {
          "char_filter": {
            "my_char_filter": { "type": "mapping", "mappings": ["_ => "] }
          },
          "filter": {
            "my_stopword": { "type": "stop", "stopwords": ["a", "an", "but", "are"] }
          },
          "tokenizer": {
            "my_tokenizer": { "type": "pattern", "pattern": "[ ]" }
          },
          "analyzer": {
            "text_general": {
              "type": "custom",
              "char_filter": ["my_char_filter"],
              "filter": ["my_stopword", "lowercase"],
              "tokenizer": "my_tokenizer"
            }
          }
        }
      }
    }'

2.2 Run an analysis query (index)

    curl -XGET --tlsv1.2 --negotiate -k -u : "https://xx.xx:24100/h0323/_analyze?pretty" -H 'Content-Type:application/json' -d'
    {
      "analyzer": "text_general",
      "text": "我们 的祖国 名称 是 ChI_na, we are 北大 dogs"
    }'

2.3 Create the analyzer (query)

    curl -XPUT --tlsv1.2 --negotiate -k -u : "https://xx.xx:24100/h0323?pretty" -H 'Content-Type:application/json' -d'
    {
      "settings": {
        "analysis": {
          "char_filter": {
            "my_char_filter": { "type": "mapping", "mappings": ["北大 =>北京大学", "_ => "] }
          },
          "filter": {
            "my_stopword": { "type": "stop", "stopwords": ["a", "an", "but", "are"] }
          },
          "tokenizer": {
            "my_tokenizer": { "type": "pattern", "pattern": "[ ]" }
          },
          "analyzer": {
            "text_general": {
              "type": "custom",
              "char_filter": ["my_char_filter"],
              "filter": ["my_stopword", "lowercase"],
              "tokenizer": "my_tokenizer"
            }
          }
        }
      }
    }'

2.4 Run an analysis query (query)

    curl -XGET --tlsv1.2 --negotiate -k -u : "https://xx.xx:24100/h0323/_analyze?pretty" -H 'Content-Type:application/json' -d'
    {
      "analyzer": "text_general",
      "text": "我们 的祖国 名称 是 ChI_na, we are 北大 dogs"
    }'
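The same _analyze call can be issued from Python instead of curl. The following is only a sketch: it assumes the requests and requests-kerberos packages are installed and that a Kerberos ticket has already been obtained with kinit, mirroring curl's "--negotiate -k -u :" options (verify=False corresponds to -k).

```python
# Sketch: calling the _analyze endpoint from section 1.2 with Python requests,
# using SPNEGO/Kerberos authentication from an existing ticket cache.
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

resp = requests.get(
    "https://xx.xx:24100/h0323/_analyze?pretty",
    json={"analyzer": "text_en_splitting_tight",
          "text": "我们 的祖国 名称 是 ChI_na, we are 北大 dogs"},
    auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL),
    verify=False)  # equivalent of curl -k for a self-signed certificate
print(resp.text)
```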
-
Lucene (and therefore Elasticsearch) uses the Boolean model to find matching documents and a formula called the practical scoring function to compute relevance. Elasticsearch's custom scoring mechanism, function_score, lets users define their own relevance score for a query, giving fine-grained control over scoring. For details see: https://www.elastic.co/guide/cn/elasticsearch/guide/current/practical-scoring-function.html

1 Create the index

    curl -XPUT cid:link_0

2 Create the mapping

    curl -H "Content-Type: application/json" -XPUT cid:link_0/video/_mapping?include_type_name=true -d '
    {
      "video": {
        "properties": {
          "title":       { "type": "text", "analyzer": "snowball" },
          "description": { "type": "text", "analyzer": "snowball" },
          "views":       { "type": "integer" },
          "likes":       { "type": "integer" },
          "created_at":  { "type": "date" }
        }
      }
    }'

3 Add data

    curl -H "Content-Type: application/json" -XPUT cid:link_0/video/1 -d '
    {
      "title": "Sick Sad World: Cold Breeze on the Interstate",
      "description": "Is your toll collector wearing pants a skirt or nothing but a smile Cold Breeze on the Interstate next on Sick ",
      "views": 500,
      "likes": 2,
      "created_at": "2023-04-22T08:00:00"
    }'

    curl -H "Content-Type: application/json" -XPUT cid:link_0/video/2 -d '
    {
      "title": "Sick Sad World: The Severed Pianist",
      "description": "When he turned up his nose at accordion lessons, they cut off his inheritance molto allegro. The Severed Pianist, ne",
      "views": 6000,
      "likes": 100,
      "created_at": "2023-04-22T12:00:00"
    }'

    curl -H "Content-Type: application/json" -XPUT cid:link_0/video/3 -d '
    {
      "title": "Sick Sad World: Avant Garde Obstetrician",
      "description": "Meet the avant-garde obstetrician who has turned his cast offs into art work. Severed Umbilical cord sculpture next,",
      "views": 100,
      "likes": 130,
      "created_at": "2023-04-22T23:00:00"
    }'

4 Compute the score

Incorrect example (the unescaped single quotes around likes and views terminate the shell-quoted request body early):

    curl -H "Content-Type: application/json" -XPOST cid:link_0/video/_search -d '
    {
      "query": {
        "function_score": {
          "query": { "match": { "_all": "severed" } },
          "script_score": {
            "script": "_score * Math.log(doc['likes'].value + doc['views'].value + 1)"
          }
        }
      }
    }'

Correct examples follow; note that single quotes inside the inline script are written as \u0027.

A. Using the built-in script_score function:

    curl -X GET "cid:link_0/video/_search?pretty" -H 'Content-Type: application/json' -d '
    {
      "query": {
        "function_score": {
          "query": { "match": { "_all": "severed" } },
          "script_score": {
            "script": { "source": "Math.log(2 + doc[\u0027likes\u0027].value)" }
          }
        }
      }
    }'

Output:

    {
      "took" : 3,
      "timed_out" : false,
      "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
      "hits" : {
        "total" : { "value" : 0, "relation" : "eq" },
        "max_score" : null,
        "hits" : [ ]
      }
    }

B. Using the linear decay function:

    curl -H "Content-Type: application/json" -XPOST cid:link_0/video/_search -d '
    {
      "query": {
        "function_score": {
          "functions": [
            { "linear": { "views": { "origin": 5000, "scale": 2500 } } },
            { "linear": { "likes": { "origin": 200, "scale": 90 } } }
          ]
        }
      }
    }'

Output:

    {
      "took": 5,
      "timed_out": false,
      "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 },
      "hits": {
        "total": { "value": 3, "relation": "eq" },
        "max_score": 0.35555556,
        "hits": [
          {
            "_index": "searchtub_2",
            "_type": "video",
            "_id": "2",
            "_score": 0.35555556,
            "_source": {
              "title": "Sick Sad World: The Severed Pianist",
              "description": "When he turned up his nose at accordion lessons, they cut off his inheritance molto allegro. The Severed Pianist, ne",
              "views": 6000,
              "likes": 100,
              "created_at": "2023-04-22T12:00:00"
            }
          },
          {
            "_index": "searchtub_2",
            "_type": "video",
            "_id": "3",
            "_score": 0.012222222,
            "_source": {
              "title": "Sick Sad World: Avant Garde Obstetrician",
              "description": "Meet the avant-garde obstetrician who has turned his cast offs into art work. Severed Umbilical cord sculpture next,",
              "views": 100,
              "likes": 130,
              "created_at": "2023-04-22T23:00:00"
            }
          },
          {
            "_index": "searchtub_2",
            "_type": "video",
            "_id": "1",
            "_score": 0,
            "_source": {
              "title": "Sick Sad World: Cold Breeze on the Interstate",
              "description": "Is your toll collector wearing pants a skirt or nothing but a smile Cold Breeze on the Interstate next on Sick ",
              "views": 500,
              "likes": 2,
              "created_at": "2023-04-22T08:00:00"
            }
          }
        ]
      }
    }
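To see where the _score values in example B come from, here is a small worked sketch (not part of the original post) of Elasticsearch's linear decay, assuming the documented defaults of decay = 0.5, offset = 0, and score_mode = multiply.

```python
# Sketch: reproducing the linear-decay scores returned in example B.
# For linear decay, Elasticsearch computes
#   s = scale / (1 - decay)                       (decay defaults to 0.5)
#   score = max(0, (s - max(0, |value - origin| - offset)) / s)
# and, with the default score_mode "multiply", multiplies the per-field scores.
def linear_decay(value, origin, scale, decay=0.5, offset=0):
    s = scale / (1.0 - decay)
    return max(0.0, (s - max(0.0, abs(value - origin) - offset)) / s)

docs = {
    1: {"views": 500,  "likes": 2},
    2: {"views": 6000, "likes": 100},
    3: {"views": 100,  "likes": 130},
}

for doc_id, fields in docs.items():
    score = (linear_decay(fields["views"], origin=5000, scale=2500) *
             linear_decay(fields["likes"], origin=200, scale=90))
    print(doc_id, round(score, 8))
# Expected: doc 2 -> 0.35555556, doc 3 -> 0.01222222, doc 1 -> 0.0,
# matching the _score values in the response above.
```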
-
Calling HDFS from Spring Boot

1 HDFS overview
HDFS (Hadoop Distributed File System) is a distributed file system designed to run on commodity hardware. It is highly fault tolerant, supports high-throughput data access, and is well suited to applications with large data sets.

2 Sample background
The objects of HDFS business operations are files. The file operations covered by the sample code include:
- creating a directory
- writing a file
- appending content to a file
- reading a file
- deleting a file or directory
HDFS supports other operations as well, such as setting file permissions; these can be explored after mastering this sample.

3 Invoking the sample on Windows
1. Prepare the environment: https://bbs.huaweicloud.com/forum/thread-88552-1-1.html
2. Check the system time: the difference from the cluster time must not exceed 5 minutes.
3. Check that C:\Windows\System32\drivers\etc\hosts contains the host-name/IP mappings of all cluster nodes.
4. Open the hdfs-springboot directory of the sample code in IDEA. Dependencies are normally downloaded automatically; if not, select the pom.xml file in that directory, right-click it, choose "Add As Maven Project", and wait for the dependencies to finish downloading.
5. Download the user authentication credential from the Manager page and extract the key files user.keytab and krb5.conf.
6. Obtain core-site.xml and hdfs-site.xml from the client directory /opt/client/HDFS/hadoop/etc/hadoop.
7. Put the four files user.keytab, krb5.conf, core-site.xml and hdfs-site.xml in one directory.
8. Configure the user name and the configuration-file directory (the directory from step 7) in application.properties.
9. Open the test class HDFApplication.java, right-click the file and choose Run.
10. Call the interface POST cid:link_0 to create an HDFS directory.

4 Debugging on Linux
1. Complete the Windows steps above.
2. Build the package in the Windows environment.
3. Check that the Linux environment time differs from the cluster by no more than 5 minutes.
4. Check that the Linux JDK version is 1.8.
5. Check that /etc/hosts on the Linux host contains the host-name/IP mappings of all cluster nodes.
6. Create a directory for the sample, for example /opt/hdfstest.
7. Upload the hdfs-springboot-1.0-SNAPSHOT.jar generated under the target directory on Windows to /opt/hdfstest.
8. Upload the configuration files that were verified on Windows to /opt/hdfstest/conf.
9. Configure the user name and the configuration-file directory (/opt/hdfstest/conf/) in application.properties.
10. Start the service with:

    java -jar hdfs-springboot-1.0-SNAPSHOT.jar