Class Discussion Summary (Jan 28)

To-Do Date: Feb 5 at 11:59pm

Put the citations to papers discussed here:

Synthetic Depth of Field / Portrait Mode

Scribes:


Arik Yueh
Abdullah Al-Omari
Esmaeil Mirvakili
Yishu Wang
Alexander Cardaras
Yu Ji

Write the Class Discussion Summary Here:

  • (Easy) Does synthetic depth of field replace the need for big lens DSLR cameras?

No. Phones running this software cannot replace DSLR cameras, because hardware limitations prevent them from fully replicating a DSLR's capabilities.

Examples include differences in aperture and focal length. Still, as the technology evolves, there may come a day when mobile phone cameras directly compete with, or even replace, DSLRs.
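To make the aperture/focal-length point concrete, here is a back-of-envelope sketch of the optical gap using the thin-lens model. All the numbers (a 4.3 mm f/1.8 phone lens on a ~6.2 mm-wide sensor, a 50 mm f/1.8 full-frame lens, subject at 2 m) are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope comparison of optical background blur (thin-lens model).
# For a point at infinity, with the lens focused at distance s, the blur
# disk on the sensor has diameter  b = f^2 / (N * (s - f)),
# where f is the focal length and N the f-number. What matters perceptually
# is b relative to the sensor width.

def blur_fraction(f_mm, n_stop, s_mm, sensor_width_mm):
    """Blur-disk diameter at infinity as a fraction of sensor width."""
    b = f_mm * f_mm / (n_stop * (s_mm - f_mm))
    return b / sensor_width_mm

# Illustrative numbers (assumptions): phone ~4.3 mm f/1.8 on a ~6.2 mm-wide
# sensor; full-frame DSLR 50 mm f/1.8 on a 36 mm-wide sensor; subject at 2 m.
phone = blur_fraction(4.3, 1.8, 2000, 6.2)
dslr = blur_fraction(50.0, 1.8, 2000, 36.0)
print(f"phone: {phone:.4%}, DSLR: {dslr:.4%}, ratio: {dslr / phone:.0f}x")
```

With these assumed numbers the DSLR's relative blur disk comes out roughly twenty times larger, which is the physical gap the synthetic blur is trying to compensate for in software.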

  • (Medium) When does this fail? What are scenarios in which it will work poorly?

We think synthetic depth of field performs poorly in edge cases, i.e., when the algorithm has trouble telling the different levels of depth apart. This includes scenes unlike those the CNN was trained on, and perhaps low-light conditions. The two approaches differ here: the semantic-segmentation approach used a single lens and a convolutional neural network to decide what should stay the main focus and what should be blurred out, while the stereo algorithm used two views of the scene to measure the depth of the image.
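Whichever path produces the depth map, the rendering step both feed into is the same idea: blur each pixel in proportion to its distance from the focal plane. Below is a minimal numpy sketch of that step; the function names and the box blur (standing in for a real lens kernel) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def box_blur(img, radius):
    """Average each pixel over its (2r+1)x(2r+1) neighborhood (edge-padded)."""
    if radius == 0:
        return img.astype(float)
    h, w = img.shape
    p = np.pad(img.astype(float), radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros((h, w))
    for dy in range(k):          # sum shifted copies, then normalize
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def synthetic_dof(img, depth, focus_depth, max_radius=3):
    """Blur each pixel in proportion to |depth - focus_depth| (depths in 0..1)."""
    radius = np.clip(
        np.round(np.abs(depth - focus_depth) * max_radius).astype(int),
        0, max_radius)
    # Precompute one blurred copy per radius, then pick per pixel.
    blurred = [box_blur(img, r) for r in range(max_radius + 1)]
    out = np.zeros(img.shape, dtype=float)
    for r in range(max_radius + 1):
        out[radius == r] = blurred[r][radius == r]
    return out
```

A depth map that is wrong at object edges (the failure case discussed above) shows up directly here as blur bleeding onto the subject or a sharp halo around it.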

  • (Hard) This paper uses dual pixels, while many phones have dual cameras. Will this work better or worse?

We think that capturing an image pair with the dual cameras, then applying the stereo algorithm together with semantic segmentation, would produce a depth map superior to either method on its own, and so give a good result for the synthetic depth of field.
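One simple way to combine the two cues is to let the segmentation mask act as a prior on the stereo estimate, e.g., flattening the subject onto a single focal plane. The sketch below is our illustration of that idea; `fuse_depth`, the pull-toward-median rule, and `alpha=0.7` are all assumptions, not the paper's method:

```python
import numpy as np

def fuse_depth(disparity, person_mask, alpha=0.7):
    """Blend a stereo disparity map with a segmentation prior.

    Inside the person mask, pull disparity toward the subject's median,
    flattening the subject onto one focal plane; elsewhere, keep the
    stereo estimate as-is.
    """
    fused = disparity.astype(float).copy()
    if person_mask.any():
        subject = np.median(disparity[person_mask])
        fused[person_mask] = (alpha * subject
                              + (1 - alpha) * disparity[person_mask])
    return fused
```

The design intuition is that stereo gives metric depth everywhere but is noisy on textureless skin and clothing, while segmentation knows where the subject is but not how far away; blending lets each cue cover the other's weakness.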

But in terms of cost and the environment, it could be worse: a second camera increases manufacturing cost, power consumption, and the space required inside the phone, along with the associated environmental impact.