• [Knowledge Sharing] App published via Google Play Console stuck in "in review" status
    An app published through Google Play Console that sits in the "In review" state for a long time can be confusing and stressful. Below are some likely causes and ways to handle the situation that may help speed up the review.
    Common causes:
    - Surge in submissions: the Google Play team sometimes receives a large volume of new app submissions, which creates a review backlog.
    - Complex app content: apps with complex content or functionality can take longer to review, especially where privacy, permissions, or financial transactions are involved.
    - Policy compliance checks: apps that involve ads, payments, or data collection go through stricter policy compliance checks.
    - Technical issues: occasionally a technical problem causes an app to get stuck at the review stage.
    What to do:
    1. Confirm the app information is accurate. Make sure the app description, permission declarations, privacy policy, and other listing information are correct and comply with Google Play requirements.
    2. Check your email. Google usually contacts developers by email during review. If your app is rejected or more information is needed, Google will notify you by email; also check your spam folder in case a message was missed.
    3. Update the app version. If you have already waited a long time, submitting an updated version can sometimes trigger a fresh review. Before updating, make sure anything that might be delaying the review has been resolved.
    4. Contact the Google Play support team. If nothing has moved for a long time, contact Google Play support directly for more information about the review status: sign in to Google Play Console, click "Help & feedback" in the left-hand menu, search for "contact support", then fill in the requested details and submit the request.
    5. Ask the Google Play community. Post on the Google Play developer forums to see whether other developers have hit the same issue and how they resolved it.
    Template email (for contacting Google support):
    Subject: Application Stuck in Review for Several Weeks
    Hello Google Play Support Team,
    I hope this message finds you well. I am writing to inquire about the status of my application submission, which has been stuck in the "In Review" stage for several weeks. Here are the details of my application:
    - **Application Name**: [Your Application Name]
    - **Package Name**: [com.yourcompany.yourapp]
    - **Submission Date**: [Submission Date]
    - **Current Status**: In Review
    I have thoroughly checked all the required fields and ensured that my application complies with the Google Play policies. However, it has been [X weeks] since I submitted my application, and there has been no change in its status.
    Could you please provide an update on the current status of my application or any additional information that may be required to expedite the review process?
    Thank you for your assistance.
    Best regards,
    [Your Name]
    [Your Developer Account Email]
    Conclusion: an app stuck in "In review" is certainly frustrating, but with the steps above you can push the review process forward proactively.
  • [Hot Activity] mother and baby, animal world, mother always loves her baby
    mother and baby
  • [Tech Tips] Converting an image to a Base64 string in Java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.commons.codec.binary.Base64;

/**
 * Converts an image file to a Base64 string,
 * and decodes a Base64 string back into an image file.
 * Created 2015-06-01 15:50
 */
public class Img2Base64Util {

    public static void main(String[] args) {
        String imgFile = "d:\\3.jpg";        // image to encode
        String imgbese = getImgStr(imgFile);
        System.out.println(imgbese.length());
        System.out.println(imgbese);
        String imgFilePath = "d:\\332.jpg";  // newly generated image
        generateImage(imgbese, imgFilePath);
    }

    /**
     * Encode an image file as a Base64 string.
     * @param imgFile path of the image to encode
     * @return the Base64-encoded string
     */
    public static String getImgStr(String imgFile) {
        // Read the image file into a byte array and Base64-encode it.
        // Note: in.available() is only reliable for local files that fit in memory.
        byte[] data = null;
        try {
            InputStream in = new FileInputStream(imgFile);
            data = new byte[in.available()];
            in.read(data);
            in.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return new String(Base64.encodeBase64(data));
    }

    /**
     * Decode a Base64 string and write it back out as an image file.
     * @param imgStr Base64-encoded image data
     * @param imgFilePath full path of the output image
     * @return true on success, false otherwise
     */
    public static boolean generateImage(String imgStr, String imgFilePath) {
        if (imgStr == null) {
            // no image data
            return false;
        }
        try {
            // Base64 decode; decodeBase64 already returns the raw bytes,
            // so no extra byte adjustment is needed before writing.
            byte[] b = Base64.decodeBase64(imgStr);
            // write the decoded bytes out as a JPEG file
            OutputStream out = new FileOutputStream(imgFilePath);
            out.write(b);
            out.flush();
            out.close();
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
  • [Other] Angel of Compassion
    prompt angel of compassion
  • [Atlas200] Making the SD card by burning image files
    I'm confused about something: if the card is made by burning the image files, then the SD card highlighted in the red box in Figure 1 should be a blank card. In that case, does the third prerequisite shown in the figure mean logging in without an SD-card system present?
  • [Debugging] Error: YOLOv3_darknet53 image decoding fails: [Decode] failed. Decode: image decode failed
    1. System environment
    Hardware environment (Ascend/GPU/CPU): ModelArts
    Software environment: MindSpore version 1.5.1
    Execution mode: PyNative (PYNATIVE_MODE)
    Python version: 3.7.6
    Operating system platform: Linux
    2. Problem description
    2.1 Problem description
    Image decoding fails when training YOLOv3_darknet53.
    2.2 Error message
    [Decode] failed. Decode: image decode failed
    2.3 Script code
# Copyright 2020-2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""YoloV3 train."""
import os
import time
import datetime

import mindspore as ms
import mindspore.nn as nn
import mindspore.communication as comm

from src.yolo import YOLOV3DarkNet53, YoloWithLossCell
from src.logger import get_logger
from src.util import AverageMeter, get_param_groups, cpu_affinity
from src.lr_scheduler import get_lr
from src.yolo_dataset import create_yolo_dataset
from src.initializer import default_recurisive_init, load_yolov3_params
from src.util import keep_loss_fp32
from model_utils.config import config
# only useful for huawei cloud modelarts.
from model_utils.moxing_adapter import moxing_wrapper, modelarts_pre_process

ms.set_seed(1)


def conver_training_shape(args):
    training_shape = [int(args.training_shape), int(args.training_shape)]
    return training_shape


def set_graph_kernel_context():
    if ms.get_context("device_target") == "GPU":
        ms.set_context(enable_graph_kernel=True)
        ms.set_context(graph_kernel_flags="--enable_parallel_fusion "
                                          "--enable_trans_op_optimize "
                                          "--disable_cluster_ops=ReduceMax,Reshape "
                                          "--enable_expand_ops=Conv2D")


def network_init(args):
    device_id = int(os.getenv('DEVICE_ID', '0'))
    ms.set_context(mode=ms.GRAPH_MODE, device_target=args.device_target,
                   save_graphs=False, device_id=device_id)
    set_graph_kernel_context()
    # Set mempool block size for improving memory utilization, which will not take effect in GRAPH_MODE
    if ms.get_context("mode") == ms.PYNATIVE_MODE:
        ms.set_context(mempool_block_size="31GB")
    # Since the default max memory pool available size on ascend is 30GB,
    # which does not meet the requirements and needs to be adjusted larger.
    if ms.get_context("device_target") == "Ascend":
        ms.set_context(max_device_memory="31GB")
    profiler = None
    if args.need_profiler:
        profiling_dir = os.path.join("profiling",
                                     datetime.datetime.now().strftime('%Y-%m-%d_time_%H_%M_%S'))
        profiler = ms.profiler.Profiler(output_path=profiling_dir)

    # init distributed
    if args.is_distributed:
        comm.init()
        args.rank = comm.get_rank()
        args.group_size = comm.get_group_size()

    if args.device_target == "GPU" and args.bind_cpu:
        cpu_affinity(args.rank, min(args.group_size, args.device_num))

    # select for master rank save ckpt or all rank save, compatible for model parallel
    args.rank_save_ckpt_flag = 0
    if args.is_save_on_master:
        if args.rank == 0:
            args.rank_save_ckpt_flag = 1
    else:
        args.rank_save_ckpt_flag = 1

    # logger
    args.outputs_dir = os.path.join(args.ckpt_path,
                                    datetime.datetime.now().strftime('%Y-%m-%d_time_%H_%M_%S'))
    args.logger = get_logger(args.outputs_dir, args.rank)
    args.logger.save_args(args)
    return profiler


def parallel_init(args):
    ms.reset_auto_parallel_context()
    parallel_mode = ms.ParallelMode.STAND_ALONE
    degree = 1
    if args.is_distributed:
        parallel_mode = ms.ParallelMode.DATA_PARALLEL
        degree = comm.get_group_size()
    ms.set_auto_parallel_context(parallel_mode=parallel_mode, gradients_mean=True, device_num=degree)


@moxing_wrapper(pre_process=modelarts_pre_process)
def run_train():
    """Train function."""
    if config.lr_scheduler == 'cosine_annealing' and config.max_epoch > config.T_max:
        config.T_max = config.max_epoch
    config.lr_epochs = list(map(int, config.lr_epochs.split(',')))
    config.data_root = os.path.join(config.data_dir, 'train2014')
    config.annFile = os.path.join(config.data_dir, 'annotations/instances_train2014.json')

    profiler = network_init(config)

    loss_meter = AverageMeter('loss')
    parallel_init(config)

    network = YOLOV3DarkNet53(is_training=True)
    # default is kaiming-normal
    default_recurisive_init(network)
    load_yolov3_params(config, network)

    network = YoloWithLossCell(network)
    config.logger.info('finish get network')

    if config.training_shape:
        config.multi_scale = [conver_training_shape(config)]

    ds = create_yolo_dataset(image_dir=config.data_root, anno_path=config.annFile,
                             is_training=True, batch_size=config.per_batch_size,
                             device_num=config.group_size, rank=config.rank, config=config)
    config.logger.info('Finish loading dataset')

    config.steps_per_epoch = ds.get_dataset_size()
    lr = get_lr(config)
    opt = nn.Momentum(params=get_param_groups(network), momentum=config.momentum,
                      learning_rate=ms.Tensor(lr), weight_decay=config.weight_decay,
                      loss_scale=config.loss_scale)
    is_gpu = ms.get_context("device_target") == "GPU"
    if is_gpu:
        loss_scale_value = 1.0
        loss_scale = ms.FixedLossScaleManager(loss_scale_value, drop_overflow_update=False)
        network = ms.build_train_network(network, optimizer=opt, loss_scale_manager=loss_scale,
                                         level="O2", keep_batchnorm_fp32=False)
        keep_loss_fp32(network)
    else:
        network = nn.TrainOneStepCell(network, opt, sens=config.loss_scale)
    network.set_train()

    t_end = time.time()
    data_loader = ds.create_dict_iterator(output_numpy=True)
    first_step = True
    stop_profiler = False

    for epoch_idx in range(config.max_epoch):
        for step_idx, data in enumerate(data_loader):
            images = data["image"]
            input_shape = images.shape[2:4]
            config.logger.info('iter[{}], shape{}'.format(step_idx, input_shape[0]))
            images = ms.Tensor.from_numpy(images)

            batch_y_true_0 = ms.Tensor.from_numpy(data['bbox1'])
            batch_y_true_1 = ms.Tensor.from_numpy(data['bbox2'])
            batch_y_true_2 = ms.Tensor.from_numpy(data['bbox3'])
            batch_gt_box0 = ms.Tensor.from_numpy(data['gt_box1'])
            batch_gt_box1 = ms.Tensor.from_numpy(data['gt_box2'])
            batch_gt_box2 = ms.Tensor.from_numpy(data['gt_box3'])

            loss = network(images, batch_y_true_0, batch_y_true_1, batch_y_true_2,
                           batch_gt_box0, batch_gt_box1, batch_gt_box2)
            loss_meter.update(loss.asnumpy())

            # it is used for loss, performance output per config.log_interval steps.
            if (epoch_idx * config.steps_per_epoch + step_idx) % config.log_interval == 0:
                time_used = time.time() - t_end
                if first_step:
                    fps = config.per_batch_size * config.group_size / time_used
                    per_step_time = time_used * 1000
                    first_step = False
                else:
                    fps = config.per_batch_size * config.log_interval * config.group_size / time_used
                    per_step_time = time_used / config.log_interval * 1000
                config.logger.info('epoch[{}], iter[{}], {}, fps:{:.2f} imgs/sec, '
                                   'lr:{}, per step time: {}ms'.format(epoch_idx + 1, step_idx + 1,
                                                                       loss_meter, fps, lr[step_idx],
                                                                       per_step_time))
                t_end = time.time()
                loss_meter.reset()
            if config.need_profiler:
                if epoch_idx * config.steps_per_epoch + step_idx == 10:
                    profiler.analyse()
                    stop_profiler = True
                    break
        if config.rank_save_ckpt_flag:
            ckpt_path = os.path.join(config.outputs_dir, 'ckpt_' + str(config.rank))
            if not os.path.exists(ckpt_path):
                os.makedirs(ckpt_path, exist_ok=True)
            ckpt_name = os.path.join(ckpt_path,
                                     "yolov3_{}_{}.ckpt".format(epoch_idx + 1, config.steps_per_epoch))
            ms.save_checkpoint(network, ckpt_name)
        if stop_profiler:
            break

    config.logger.info('==========end training===============')


if __name__ == "__main__":
    run_train()
    3. Root cause analysis
    A short record of the troubleshooting process: the user was training on their own dataset, so at first it was unclear whether the failure came from the code or from the data. The same network was first run on the public COCO dataset and produced no error, which ruled out a code problem. Next, get_batch_size(), get_class_indexing(), get_col_names(), get_dataset_size(), and get_repeat_count() were used to check whether the dataset was being loaded correctly, and it was, which also ruled out problems with the images themselves. It eventually turned out that the JSON file describing the image annotations in the dataset was corrupted: one image referenced in it could not be found. That concluded the investigation.
    4. Solution
    Switched to a dataset (not public) whose JSON annotation file is correct.
    5. Lessons learned
    When troubleshooting dataset problems, don't only check the directory structure and the images themselves; the JSON file that stores the image and annotation information matters just as much. A quick consistency check like the one sketched below can catch this kind of problem before training starts.
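    Since the root cause was a corrupted annotation file rather than the images, a small standalone check of the annotation JSON is worth running before training. The sketch below is a minimal example under the assumption of a COCO-style instances JSON and image directory like the ones referenced in the script above; the paths are placeholders to be replaced with your own dataset layout.
import json
import os


def check_coco_annotations(ann_file, image_dir):
    """Parse a COCO-style annotation JSON and verify every referenced image exists on disk."""
    # A corrupted JSON fails right here with a clear parse error,
    # instead of surfacing later as an image decode error during training.
    with open(ann_file, "r") as f:
        coco = json.load(f)

    images = coco.get("images", [])
    missing = [img["file_name"] for img in images
               if not os.path.isfile(os.path.join(image_dir, img["file_name"]))]

    print("images listed in json: {}, missing on disk: {}".format(len(images), len(missing)))
    for name in missing[:20]:  # print at most the first 20 missing files
        print("missing:", name)
    return not missing


if __name__ == "__main__":
    # Placeholder paths; point these at your own data_dir.
    check_coco_annotations("annotations/instances_train2014.json", "train2014")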
  • [Help Wanted] How long are asynchronously submitted image moderation jobs retained?
    Could you tell me how long Huawei Cloud retains image moderation jobs submitted asynchronously? Many of my jobs that are more than half a day or a day old return a 404 "job not found" when I query them.
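    One practical consequence of those 404s is that the result of an async job is only kept for a limited window, so each job_id should be polled and its result persisted promptly rather than queried much later. The snippet below is only a rough sketch of that pattern: the endpoint URL, token handling, and status values are assumptions, so check the Moderation API reference for the actual query interface and the documented retention period.
import time

import requests

# Hypothetical endpoint and token placeholders; substitute the real batch-job query URL
# for your region and a valid IAM token from the Moderation API reference.
QUERY_URL = "https://moderation.example-region.myhuaweicloud.com/v1.0/moderation/image/batch/{job_id}"
TOKEN = "<your-iam-token>"


def poll_job(job_id, interval=10, timeout=600):
    """Poll an async moderation job until it finishes, fails, times out, or has already expired (404)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(QUERY_URL.format(job_id=job_id), headers={"X-Auth-Token": TOKEN})
        if resp.status_code == 404:
            # The job is no longer retained on the server side, so its result cannot be recovered.
            return {"job_id": job_id, "status": "expired"}
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") in ("finish", "failed"):  # assumed terminal states
            return body  # persist this result immediately on the caller side
        time.sleep(interval)
    return {"job_id": job_id, "status": "timeout"}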
  • [Help Wanted] Schematic review
    [Atlas 200 product] [PCB design] Schematic submitted for review; the email has been sent, thank you. Please focus on the following nets, their peripheral circuits, and the level-shifting circuit on page 5:
    1. GE_PHY_RST_N
    2. GE_PHY_INT
    3. CLKREQB
    4. LANWAKEB
    5. HOST_RST_N
    6. PCIE_PERST_N
    7. ETH0_INTB
    8. GPIO_73
    9. PERSTB
    10. PHYRSTB
  • [Help Wanted] Pin connection question
    Hello, is there a reference design available for the reset circuit of the HOST_RST_N pin?
  • [Help Wanted] Connecting the interrupt pin of an RGMII PHY chip
    Hello, the PHY chip I am using is the RTL8211F. Its datasheet says the interrupt pin (pin 31) should be pulled up to 3.3 V, but the interrupt input pin on the Atlas 200 is 1.8 V. Can pin 31 of the PHY, pulled up to 3.3 V, be connected to the GE_PHY_INT input pin of the Atlas?
  • [Atlas200] [Atlas 200 product] [PCB design] Schematic, requesting review, thank you
    [Atlas 200 product] [PCB design] Schematic submitted for review; the email has been sent, thank you.
  • [Help Wanted] Schematic review
    The review thread I posted earlier can no longer be found. What should I do? Should I send a new email and post a new thread?
  • [Other] Application scenarios for Content Moderation - Image
    Content Moderation - Image is used in the following scenarios:
    Live video streaming: in interactive live-streaming scenarios, thousands of rooms broadcast concurrently, and reviewing all of that content manually is practically impossible. With image moderation, the content of every room can be monitored in real time, and suspicious rooms can be identified and flagged for early warning. Advantages: high accuracy, based on improved deep-learning algorithms; fast response, with live-stream detection taking less than 0.1 seconds.
    Online shopping: images uploaded by merchants and users are reviewed automatically, and non-compliant images are identified and flagged efficiently, preventing pornographic, violent, or politically sensitive images from being published and reducing manual review costs and compliance risk. Advantages: high accuracy, based on improved deep-learning algorithms; fast response, with single-image recognition taking less than 0.1 seconds.
    Websites and forums: identifying and handling non-compliant images is a key task for user-generated-content (UGC) sites. With content moderation, non-compliant images uploaded by users can be identified and flagged, helping customers locate and handle them quickly and reducing compliance risk. Advantages: high accuracy, based on improved deep-learning algorithms; fast response, with single-image recognition taking less than 0.1 seconds.