3DMV: Learning 3D with Multi-View Supervision

CVPR 2023 Workshop

Call for papers:   February 27th

Submission Deadline:   April 2nd

Workshop Day:   June 19th


This workshop focuses on multi-view deep learning. It covers multi-view approaches to core 3D understanding tasks (recognition, detection, segmentation) as well as methods that use posed or un-posed multi-view images for 3D reconstruction and generation. Many recent advances in 3D vision apply deep learning directly to 3D data (e.g., point clouds, meshes, voxels). An alternative is to project the 3D data into multiple 2D images and process it indirectly with 2D networks. Tackling 3D vision tasks with such indirect approaches has two main advantages: (i) mature and transferable 2D computer vision models (CNNs, Transformers, diffusion models, etc.), and (ii) large and diverse labeled image datasets for pre-training (e.g., ImageNet). Furthermore, recent advances in differentiable rendering enable end-to-end pipelines that render multi-view images of the 3D data and process those images with CNNs/Transformers/diffusion models to obtain a more descriptive representation of the 3D data. However, several challenges remain in this multi-view direction, including the intersection with other modalities such as point clouds and meshes, and problems that affect 2D projections, such as occlusion and view-point selection. We aim to strengthen the synergy between multi-view research across different tasks by inviting keynote speakers from across the spectrum of 3D understanding and generation, mixing essential 3D topics (such as multi-view stereo) with modern generation techniques (such as NeRFs). The detailed topics covered in the workshop include the following:
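The render-then-process pipeline described above can be sketched in a few lines. This is a minimal, MVCNN-style illustration of view pooling under our own assumptions, not code from the workshop: the renderer and the 2D backbone are stubbed (random images, fixed random projection weights), and all function names are ours.

```python
import numpy as np

def render_views(shape_id, num_views=12, hw=(64, 64)):
    """Stub for a (differentiable) renderer: a real pipeline would project
    the 3D asset (mesh / point cloud / voxels) into num_views RGB images
    from cameras placed around the object. Here we fake it with noise."""
    rng = np.random.default_rng(shape_id)
    return rng.random((num_views, *hw, 3))

def cnn_features(image, dim=128):
    """Stub for a pre-trained 2D backbone (CNN/Transformer): maps one image
    to a feature vector. Real systems reuse ImageNet-pretrained weights."""
    flat = image.reshape(-1)
    rng = np.random.default_rng(0)  # fixed fake "weights"
    W = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return W @ flat

def multi_view_descriptor(shape_id):
    """Render -> per-view 2D features -> max-pool across views.
    Max-pooling makes the descriptor invariant to the order of the views."""
    views = render_views(shape_id)                      # (V, H, W, 3)
    feats = np.stack([cnn_features(v) for v in views])  # (V, dim)
    return feats.max(axis=0)                            # (dim,)

desc = multi_view_descriptor(shape_id=42)
print(desc.shape)  # (128,)
```

In an end-to-end system the two stubs would be a differentiable renderer and a shared 2D backbone, so gradients from a downstream loss flow through the pooled descriptor back to the 3D representation.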

  • Multi-View for 3D Object Recognition
  • Multi-View for 3D Object Detection
  • Multi-View for 3D Segmentation
  • Deep Multi-View Stereo
  • Multi-View for 3D Generation and Novel View Synthesis

Submission Timeline

  • Paper submission opens: February 27th
  • Paper submission deadline: April 2nd
  • Review period: April 4th - April 13th
  • Decisions to authors: April 15th
  • Camera-ready posters: April 22nd

Call for Papers

    We are soliciting papers that use multi-view deep learning to address problems in 3D understanding and 3D generation, including but not limited to the following topics:

  • 3D Shape Classification
  • 3D Shape Retrieval
  • Language + 3D
  • Multi-View for 3D Object Detection
  • Bird's-Eye View for 3D Object Detection
  • Multi-View Fusion for 3D Object Detection
  • Egocentric Perspective for 3D Object Detection
  • Indoor/Outdoor Scene Segmentation
  • Object Part Segmentation
  • Medical 3D Segmentation and Analysis
  • Deep Multi-View Stereo
  • Inverse Graphics from Multi-View Images
  • Indoor/Outdoor Scene Generation and Reconstruction
  • Volumetric Multi-View Representation for 3D Generation and Novel View Synthesis
  • NeRFs
  • 3D Diffusion Models for 3D Generation

Paper Submission Guidelines

  • We accept submissions of at most 8 pages (excluding references) on the aforementioned and related topics.
  • Submissions can be previously published works (from the last two years) or new works.
  • Accepted papers are non-archival and will not be included in the proceedings of CVPR 2023.
  • Submitted manuscripts should follow the CVPR 2023 paper template (if they have not been published previously).
  • All submissions will be peer-reviewed under a single-blind policy (authors should include their names in the submission).
  • PDFs must be submitted online through the submission link.
  • Authors of accepted papers will be notified to prepare camera-ready posters and upload them according to the schedule above.

Schedule (June 19)

    Session                                     Speaker(s)
    Opening Remarks                             Organizers
    Multi-View for 3D Segmentation              Matthias Niessner
    Multi-View for 3D Object Detection          Charles Qi
    Break / Poster Session                      -
    Deep Multi-View Stereo                      Georgia Gkioxari
    Multi-View for 3D Generation                Ben Poole
    Panel Discussion (includes all speakers)    Organizers


    Contact: abdullah.hamdi@kaust.edu.sa
    CVPR 2023 Workshop ©2023