[Images: material, 3D model, original image]

  • AdrianTheFrog@lemmy.world (OP)
    1 year ago

    I used the UV project modifier to automatically project the image onto the model from the camera’s point of view while I made it. It’s not particularly hard, but it does take a fair amount of time to make the model.
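Under the hood, the UV Project modifier does a per-vertex camera projection. A minimal sketch of that math (hypothetical helper; assumes a pinhole camera at a fixed position looking down its local −Z axis with no rotation, which the real modifier generalizes):

```python
import math

def project_to_uv(vertex, cam_pos, fov_deg, aspect):
    """Project a 3D point onto the image plane of a pinhole camera
    at cam_pos looking down -Z (simplified: no camera rotation),
    returning normalized UV coordinates in [0, 1]."""
    # Vector from camera to vertex, in camera space
    x = vertex[0] - cam_pos[0]
    y = vertex[1] - cam_pos[1]
    z = vertex[2] - cam_pos[2]
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Focal length implied by the horizontal field of view
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    # Perspective divide, then remap from [-1, 1] to [0, 1]
    u = (f * x / -z) / aspect * 0.5 + 0.5
    v = (f * y / -z) * 0.5 + 0.5
    return u, v

# A point straight ahead of the camera lands at the image center
print(project_to_uv((0, 0, -5), (0, 0, 0), 50, 16 / 9))  # → (0.5, 0.5)
```

Because every vertex gets its UVs from the same camera, the texture looks perfect from that viewpoint and smears only where geometry is unseen by the camera, which is why rough geometry is enough.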

    There is also a tool I used called fSpy that extracts the 3D coordinate space from the image, so that Blender’s axes align with those in the image.
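The core geometric trick behind tools like fSpy is recovering camera parameters from the vanishing points of perpendicular lines in the photo. A hypothetical simplified helper (fSpy itself handles many more cases): for two vanishing points of orthogonal directions, the rays through them must be perpendicular, which gives f² = −(v₁ − p)·(v₂ − p), where p is the principal point.

```python
import math

def focal_from_vanishing_points(vp1, vp2, principal_point):
    """Estimate the focal length (in pixels) from the vanishing points
    of two orthogonal scene directions -- the basic geometry behind
    camera-matching tools like fSpy (simplified illustrative sketch)."""
    # Shift both vanishing points so the principal point is the origin
    x1, y1 = vp1[0] - principal_point[0], vp1[1] - principal_point[1]
    x2, y2 = vp2[0] - principal_point[0], vp2[1] - principal_point[1]
    # Orthogonality of the two viewing rays: f^2 = -(v1 . v2)
    f_sq = -(x1 * x2 + y1 * y2)
    if f_sq <= 0:
        raise ValueError("vanishing points not consistent with orthogonal axes")
    return math.sqrt(f_sq)

# Symmetric vanishing points left/right of a centered principal point
print(focal_from_vanishing_points((-800, 540), (2720, 540), (960, 540)))  # → 1760.0
```

With the focal length known, the directions to the vanishing points give the camera's rotation, which is the data fSpy hands off to Blender.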

    There are a few AI models that try to recover 3D space from a 2D image (MiDaS is the most popular), but none produces results nearly as good as doing it yourself.
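What such depth-estimation models give you is a per-pixel depth map, which then has to be back-projected into 3D. A minimal sketch of that step, assuming a pinhole camera and treating the depth values as metric (MiDaS actually outputs relative inverse depth, so a real pipeline needs a scale/shift fit first):

```python
import math

def depth_to_points(depth, fov_deg):
    """Back-project a per-pixel depth map (rows of depth values) into
    3D camera-space points, assuming a pinhole camera.
    Hypothetical minimal sketch; X is right, Y is down, Z is forward."""
    h, w = len(depth), len(depth[0])
    # Focal length in pixels from the horizontal field of view
    f = (w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    points = []
    for py in range(h):
        for px in range(w):
            z = depth[py][px]
            # Scale the pixel's viewing ray by its depth
            points.append(((px - cx) * z / f, (py - cy) * z / f, z))
    return points

pts = depth_to_points([[2.0, 2.0], [2.0, 2.0]], 90)
print(len(pts))  # → 4, one point per pixel
```

The resulting point cloud tends to be noisy and has no clean surfaces or occlusion boundaries, which is why a hand-built rough model still wins for this kind of camera-projection work.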

    You only need to make a very rough model, just enough for some rough reflections, ambient occlusion, and occlusion behind objects in the image.

    I then added some lights over the emissive parts of the image, and threw some random models in there.

    • Thelsim@sh.itjust.works (mod)
      1 year ago

      Thank you for the explanation! I think I understand it now.
      You project the image onto geometry, then add rough 3D models where necessary for the reflections and for hiding objects behind them. Then you add the lighting and the extra objects.
      So it’s a bit like a combination of a 3D model and an optical illusion, like those “3D” street art drawings.

      3d is totally not my thing, but I do find it fascinating.

    • agamemnonymous@sh.itjust.works
      1 year ago

      Interesting! I was actually looking for a simple workflow to accomplish something similar, creating 3D models from AI-generated images, but more for objects than environments. That might be slightly beyond the scope of this method, or at least so manually intensive as to be pointless.

      Thank you