News Articles

New Ocular Robotics PCL code sprint


Ocular Robotics' range of RobotEye RE0x 3D scanning LIDAR systems offers a unique capability: the scanned beam can be directed, on a moment-by-moment basis, at the region of interest of the application, at the resolution the application requires, in increments as fine as 0.01 degrees. Take a mobile robotics application where you may want 3 scan lines between zero and five degrees about the horizon for obstacle avoidance, updated at 5 Hz, while travelling from one point to the next. When the robot arrives at its destination it may need to paint a wider region in order to interact with an object, putting 100, 1000, or up to a maximum of 3000 lines across an elevation range of, for example, -10 degrees to +20 degrees, to give the point density required for the designated task. In fact, RE0x scanners even enable the scanning of user-definable rectangular regions anywhere in their 360 degree azimuth by 70 degree elevation range.

Ocular Robotics and Open Perception plan to undertake a Code Sprint aimed at developing a simulator for the RE0x sensors to allow the…


PCL-ORCS - code sprint success!


Pat Marion, an R&D Engineer at Kitware, has concluded his work on the joint Open Perception - Ocular Robotics code sprint. The goal of the code sprint was to develop a pcl::Grabber driver interface for the RE0x laser sensors and to develop visualization code capable of real-time display of point cloud data streams acquired by the grabber interface.

The pcl::RobotEyeGrabber is a new C++ class that was added to the pcl io module.  It implements the pcl::Grabber interface and provides access to point cloud data streams sent over the network by RE0x laser sensors.  A point cloud visualization application called RobotEye Viewer was developed using the RobotEyeGrabber and PCLVisualizer.  The application uses the RobotEye C++ API to send control messages to the RobotEye laser sensor and displays real-time point cloud streams acquired using the grabber interface.

The following video demonstrates the RobotEye Viewer application in action using a RE05 laser sensor:


Additional…


SwRI and NIST sponsor a new PCL code sprint

The Southwest Research Institute (SwRI) and National Institute of Standards and Technology (NIST) are sponsoring a new PCL code sprint! The efforts will be focused on developing algorithms for human detection and tracking using 2D camera imagery fused with 3D point cloud data (coming from ASUS XtionPRO cameras).

Interested candidates should submit the following information to jobs@pointclouds.org:

  • a brief resume
  • a list of existing PCL contributions (if any)
  • a list of projects (emphasis on open source projects please) that they contributed to in the past

This project requires good C++ programming skills, and knowledge of PCL internals.

3 new code sprints from Toyota and Open Perception

Toyota has been a long-term supporter of PCL, and pretty much created the concept of code sprints for PCL, with the first PCL code sprint ever: TOCS! This year, we have partnered with our colleagues from Toyota again for a series of 3 new exciting code sprint projects:

  • Primitive shape (cylinders, spheres, cones, etc.) recognition in point cloud data
  • Segmentation/Clustering of objects in cluttered environments
  • 3D feature development and benchmarking

PCL-TOCS (#2) will run for 3 months during the spring of 2013. As always, interested candidates should submit the following information to jobs@pointclouds.org:

  • a brief resume
  • a list of existing PCL contributions (if any)
  • a list of projects (emphasis on open source projects please) that they contributed to in the past

This project requires good C++ programming skills, and knowledge of PCL internals.

Spectrolab PCL code sprint

It is our pleasure to announce a new code sprint from our host organization, Open Perception, and Spectrolab, a Boeing company. Spectrolab has expressed interest in the possibilities that PCL offers, and we will be searching for outstanding candidates to participate in a new code sprint that involves the development of 3D viewer software in PCL, as well as an advanced sensor grabber for Spectrolab's SpectroScan3D LIDAR imager.

The sprint will run for 3 months in the spring of 2013. Potential candidates should submit the following information to jobs@pointclouds.org:

  • a brief resume
  • a list of existing PCL contributions (if any)
  • a list of projects (emphasis on open source projects please) that they contributed to in the past

This project requires good C++ programming skills, knowledge of PCL internals and a basic understanding of laser sensors and 3D visualization.

New code sprints from Honda Research Institute

After a successful first code sprint with HRI, it is our pleasure to announce the beginning of 4 new projects:

  1. labeling outdoor pedestrian and car data as ground truth (2 months)
  2. fast 3D cluster recognition of pedestrians and cars in uncluttered scenes (3 months)
  3. part-based 3D recognition of pedestrians and cars in cluttered scenes (6 months)
  4. stereo-based road area detection (2 months)

Please see the previous HRCS sprint announcement for more information. PCL-HRCS will run for 3-6 months during Q1 2013. Interested candidates should submit the following information to jobs@pointclouds.org:

  • a brief resume
  • a list of existing PCL contributions (if any)
  • a list of projects (emphasis on open source projects please) that they contributed to in the past

This project requires good C++ programming skills, and knowledge of PCL internals.

Leica Geosystems partnership


It is our immense pleasure to announce the beginning of a new partnership between Open Perception and Leica Geosystems, and a series of new code sprints: PCL-LGSCS!

We are looking for talented contributors willing to develop efficient open source compression mechanisms for organized and unorganized 3D point cloud data in two separate code sprint projects. In both cases, the ultimate goal is to achieve as much compression as possible, but we will also require an analysis of the de/compression speed that can be obtained. A requirement is the ability to process a few million points per second on a standard laptop. In addition, we will analyze and compare different lossy vs. lossless compression techniques.

The data sources are varied, ranging from terrestrial to mobile and aerial point clouds. Besides XYZ coordinates, we expect the datasets to contain an additional intensity and/or color value per point. Each dataset will contain standard meta information, and in the case of lossy compression, we will need to specify certain error limits to be satisfied.

The organized data format consists of a series…

PCL-ORCS kickstart!


The joint Open Perception-Ocular Robotics code sprint is ready to start! The sprint will cover an efficient pcl::Grabber driver interface for the RE0x laser sensors, as well as various enhancements to our PCL visualization libraries so they can handle larger datasets and process data packets coming from these sensors faster. The developer working on the sprint is Pat Marion from Kitware. Pat is already a seasoned PCL coder and has contributed significantly to the iOS and Android ports of PCL (see this for more information) and the PCL plugin for ParaView.

We would like to thank all the other candidates for their excellent proposals! We already started following up with each of them individually, as there are many more sprints to come!

[Note:…

PCL-VLCS kickstart!


PCL-VLCS is ready to start! The sprint will cover a Plug-n-Play interface for the Velodyne HDL series to make these sensors much easier to use and the high-density point clouds easily accessible by developers. The developers working on the sprint are Keven Ring from MITRE and Kuk Cho from Korea Institute of Industrial Technology.

We would like to thank all the other candidates for their excellent proposals! We already started following up with each of them individually, as there are many more sprints to come!

[Note: the new blogging page for VLCS will be up within the next few days at http://pointclouds.org/blog/vlcs/.]

Velodyne code sprint


Project Description & Motivation: Velodyne's HDL-32 LiDAR sensor features a 360° horizontal field of view and an unmatched vertical field of view, with a range of 100 meters and a typical accuracy of ±2 cm. Rotating at 10 Hz, the sensor produces upwards of 700,000 points per second from 32 individual lasers, delivered over an Ethernet connection in UDP packets. GPS information is also included and used for clock synchronization. Capturing and converting these packets into usable points is something that, until now, customers in academia and industry have done themselves with code integrated from their own research projects. Velodyne wishes to significantly expand the reach and audience of the HDL sensor products by making these sensors Plug-n-Play -- attach the sensor to a system, load the drivers, and capture point clouds in PCL formats within minutes -- thereby leveraging the significant and growing availability of PCL viewers.

The most apt description of this sprint is that we want to make the HDL series into the "Kinect equivalent" for LiDAR systems -- a Plug-n-Play interface will make these sensors much easier to use and the high-density point clouds easily accessible by developers. The advantages…
