SemanticKITTI Point Cloud Segmentation Dataset with Open3D-ML

Contents

1. Downloading the dataset

2. Reading and visualizing the dataset

3. Source code walkthrough

3.1 Reading point cloud data (.bin files)

3.2 Reading label data (.label files)

3.3 Reading the configuration

3.4 test

3.5 train


1. Downloading the dataset

We use SemanticKITTI as the example. Download link: http://semantic-kitti.org/dataset.html#download

Download the three archives linked there.

Extract them in the same directory:

unzip data_odometry_labels.zip
unzip data_odometry_velodyne.zip
unzip data_odometry_calib.zip

After extraction, the folder layout looks like this:
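The original post showed the folder tree as an image; for reference, the standard SemanticKITTI layout after unzipping (as documented on semantic-kitti.org, sequences abbreviated) is roughly:

```
dataset/
└── sequences/
    ├── 00/
    │   ├── calib.txt
    │   ├── velodyne/
    │   │   ├── 000000.bin
    │   │   └── ...
    │   └── labels/
    │       ├── 000000.label
    │       └── ...
    ├── 01/
    └── ...
```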

2. Reading and visualizing the dataset

import open3d.ml.torch as ml3d  # or open3d.ml.tf as ml3d

# construct a dataset by specifying dataset_path
dataset = ml3d.datasets.SemanticKITTI(dataset_path='/home/zxq/data/kitti')

# get the 'all' split that combines training, validation and test set
all_split = dataset.get_split('all')

# print the attributes of the first datum
print(all_split.get_attr(0))

# print the shape of the first point cloud
print(all_split.get_data(0)['point'].shape)

# show the first 100 frames using the visualizer
vis = ml3d.vis.Visualizer()
vis.visualize_dataset(dataset, 'all', indices=range(100))


3. Source code walkthrough

3.1 Reading point cloud data (.bin files)

Both the point clouds and the labels in SemanticKITTI are stored as binary files.

datasets/utils/dataprocessing.py

    @staticmethod
    def load_pc_kitti(pc_path):  # e.g. "./000000.bin"
        scan = np.fromfile(pc_path, dtype=np.float32)  # (num_pt*4,)
        scan = scan.reshape((-1, 4))  # (num_pt, 4): x, y, z, intensity
        # points = scan[:, 0:3]  # get xyz only
        points = scan
        return points
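As a quick sanity check of the format (a self-contained sketch with a synthetic file, not part of the Open3D-ML source): each point is stored as four consecutive little-endian float32 values, with no header, so writing and re-reading round-trips exactly.

```python
import os
import tempfile

import numpy as np

# write a tiny synthetic scan: 3 points, each (x, y, z, intensity)
pts = np.array([[1.0, 2.0, 3.0, 0.5],
                [4.0, 5.0, 6.0, 0.7],
                [7.0, 8.0, 9.0, 0.9]], dtype=np.float32)
path = os.path.join(tempfile.mkdtemp(), "000000.bin")
pts.tofile(path)  # raw float32 stream, no header

# read it back the same way load_pc_kitti does
scan = np.fromfile(path, dtype=np.float32).reshape((-1, 4))
print(scan.shape)    # (3, 4)
print(scan[:, 0:3])  # the xyz columns
```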

3.2 Reading label data (.label files)

    def load_label_kitti(label_path, remap_lut):
        label = np.fromfile(label_path, dtype=np.uint32)
        label = label.reshape((-1))
        sem_label = label & 0xFFFF  # semantic label in lower half
        inst_label = label >> 16    # instance id in upper half
        assert ((sem_label + (inst_label << 16) == label).all())
        sem_label = remap_lut[sem_label]
        return sem_label.astype(np.int32)
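The bit layout and the role of remap_lut can be illustrated on synthetic labels. The learning_map subset below is only a stand-in for this sketch; the real raw-id-to-training-id mapping lives in the dataset's semantic-kitti.yaml config.

```python
import numpy as np

# each raw label packs an instance id (upper 16 bits)
# and a semantic class id (lower 16 bits)
raw = np.array([(7 << 16) | 10, (0 << 16) | 40], dtype=np.uint32)

sem_label = raw & 0xFFFF   # semantic ids: [10, 40]
inst_label = raw >> 16     # instance ids: [7, 0]
assert ((sem_label + (inst_label << 16) == raw).all())

# remap_lut maps sparse raw class ids to contiguous training ids;
# it is built from the dataset's learning_map (tiny subset shown here)
learning_map = {0: 0, 10: 1, 40: 9}
remap_lut = np.zeros(max(learning_map) + 100, dtype=np.int32)
for raw_id, train_id in learning_map.items():
    remap_lut[raw_id] = train_id

print(remap_lut[sem_label])  # [1 9]
```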

3.3 Reading the configuration

Model, dataset, and pipeline configurations are stored in ml3d/configs/*.yaml files. They are loaded like this:

import open3d.ml as _ml3d
import open3d.ml.torch as ml3d  # or open3d.ml.tf as ml3d

framework = "torch"  # or "tf"
cfg_file = "ml3d/configs/randlanet_semantickitti.yml"
cfg = _ml3d.utils.Config.load_from_file(cfg_file)

# fetch the classes by name
Pipeline = _ml3d.utils.get_module("pipeline", cfg.pipeline.name, framework)
Model = _ml3d.utils.get_module("model", cfg.model.name, framework)
Dataset = _ml3d.utils.get_module("dataset", cfg.dataset.name)

# use the arguments in the config file to construct the instances
cfg.dataset['dataset_path'] = "/home/zxq/data/kitti"
dataset = Dataset(cfg.dataset.pop('dataset_path', None), **cfg.dataset)
model = Model(**cfg.model)
pipeline = Pipeline(model, dataset, **cfg.pipeline)
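The yml file groups its options into three sections whose name fields are what get_module looks up. A trimmed sketch of the shape (field values illustrative, not the full contents of randlanet_semantickitti.yml):

```yaml
dataset:
  name: SemanticKITTI
  use_cache: true
model:
  name: RandLANet
  num_classes: 19
pipeline:
  name: SemanticSegmentation
  max_epoch: 100
```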

3.4 test

import os

import open3d.ml as _ml3d
import open3d.ml.torch as ml3d

cfg_file = "ml3d/configs/randlanet_semantickitti.yml"
cfg = _ml3d.utils.Config.load_from_file(cfg_file)

model = ml3d.models.RandLANet(**cfg.model)
cfg.dataset['dataset_path'] = "/home/zxq/data/kitti"
dataset = ml3d.datasets.SemanticKITTI(cfg.dataset.pop('dataset_path', None), **cfg.dataset)
pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=dataset, device="gpu", **cfg.pipeline)

# download the weights
ckpt_folder = "./logs/"
os.makedirs(ckpt_folder, exist_ok=True)
ckpt_path = ckpt_folder + "randlanet_semantickitti_202201071330utc.pth"
randlanet_url = "https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.pth"
if not os.path.exists(ckpt_path):
    cmd = "wget {} -O {}".format(randlanet_url, ckpt_path)
    os.system(cmd)

# load the parameters
pipeline.load_ckpt(ckpt_path=ckpt_path)

test_split = dataset.get_split("test")

# run inference on a single example;
# returns a dict with 'predict_labels' and 'predict_scores'
data = test_split.get_data(0)
result = pipeline.run_inference(data)

# evaluate performance on the test set; this will write logs to './logs'
pipeline.run_test()

3.5 train

import open3d.ml.torch as ml3d
from ml3d.torch import RandLANet, SemanticSegmentation

# use a cache for storing the results of the preprocessing (default path is './logs/cache')
dataset = ml3d.datasets.SemanticKITTI(dataset_path='/home/zxq/data/kitti/', use_cache=True)

# create the model with random initialization
model = RandLANet()

pipeline = SemanticSegmentation(model=model, dataset=dataset, max_epoch=100)

# prints training progress in the console
pipeline.run_train()
