
REQUEST hyperion source from cam in front of tv - prepare input region

Discussion in 'Feature Requests' started by giovanne, 2 January 2018.

  1. The

    The New Member

    Messages:
    23
    Hardware:
    RPi2
    Flovie, I was talking to a coworker about a similar concept on the way back from a tech conference this past Friday. I agree with most of your pros/cons.

    Since I'm not familiar with the "proto" approach, I think I'm going to explore the V4L2 virtual camera concept in combination with the OpenCV fisheye undistort. I'm completely unsure of the performance impact on the video stream. I'll play a bit with it, but I haven't purchased a true fisheye lens camera yet.

    Regarding "proto", are you talking about Protocol Buffers? Can you provide a hyperlink, please?
     
  2. The

    The New Member

    Messages:
    23
    Hardware:
    RPi2
    I explored the V4L2 virtual camera concept in combination with the OpenCV fisheye undistort options. While I'm not 100% certain, they both look to be written more for still-image scenarios. Digging a bit more, I ran into GStreamer, which seems to have some promise. It looks like it exposes a dewarp function based on OpenCV. I've run out of patience for today, and I'm not convinced I'll spend more time digging into this.

    Interesting:
    Those install instructions didn't work for me, and I gave up. The raspivid command looks like it's piping video through gst-launch-1.0. That gst-launch smells like it has a way to construct a pipeline... maybe that dewarp element can be used in some obtuse gst-launch command to construct a "pipeline" that surfaces a dewarped video stream originating from the camera, without building a special GStreamer plugin or writing code. I'm only 5% certain of that statement.

    Other examples:
     
  3. Flovie

    Flovie New Member

    Messages:
    7
    Hardware:
    RPi3
    I am not sure if GStreamer is even required, because eventually you only need to calculate a dewarped image and send it via protobuffer to the hyperion instance, so you don't need to transform the image back into a virtual camera. Here is an example from the hyperion wiki showing how to work with the protobuffer:

    https://hyperion-project.org/wiki/Protobuffer-Java-client-example

    However, it is based on Java. A few years ago, I worked on a music visualizer with hyperion. It was also written in Java and used the protobuffer to control the ambilight. It is still working pretty well. I will consider publishing the code; maybe it helps with understanding the protobuffer.
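    As far as I can tell from that Java example, the proto interface is just serialized protobuf messages sent over TCP, each prefixed with its size as a 4-byte big-endian integer (the Java client's writeInt). Here is a minimal Python sketch of only that framing; the default port and the framing details are my reading of the example, and the actual image payload would come from the protobuf-generated classes, which I'm leaving out:

```python
import socket
import struct

HYPERION_PROTO_PORT = 19445  # hyperion's default protobuffer port

def frame_message(payload: bytes) -> bytes:
    """Prefix one serialized HyperionRequest with its size as a
    4-byte big-endian integer, like the Java example's writeInt."""
    return struct.pack(">I", len(payload)) + payload

def send_message(host: str, payload: bytes) -> None:
    """Send one framed message over a fresh TCP connection."""
    with socket.create_connection((host, HYPERION_PROTO_PORT)) as sock:
        sock.sendall(frame_message(payload))
```

    A real client would serialize an image request (width, height, raw RGB bytes) with the generated protobuf classes and pass those bytes as the payload.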
     
  4. The

    The New Member

    Messages:
    23
    Hardware:
    RPi2
    Yeah, that example is pretty simplistic in that it doesn't really process a "real" image or video stream. As I don't fully understand the hyperion API, the documentation and source code aren't exactly clear about whether or who is responsible for obtaining an image for processing. I'd have no idea how to handle the dewarping math and would have to rely on a library such as OpenCV. This StackOverflow post seems to delve into some of this, and into the issues with dewarping a true 180° fisheye into a usable image. This was fun to look into, but I'm running out of motivation due to the many potential roadblocks.
     
  5. Andrew_xXx

    Andrew_xXx Software Developer

    Messages:
    19
    In the current state of hyperion, the easiest way of using a 180° fisheye image would be to capture the camera image, de-fisheye it with your own software, and then send it to hyperion.
    As for the setup, the easiest would be to have a virtual camera endpoint, so you can send the de-fisheyed image to it for hyperion to pick up, while at the same time the RPi captures the fisheye image from the real camera. This way everything is done on the same RPi: it has two camera inputs, one real and one virtual, and it doesn't need any hack in hyperion.
    And since OpenCV seems to be the easiest way to de-fisheye, maybe use Python CV?
    It goes like this: the real camera is video0, the virtual camera is video1, and some Python CV script captures from video0, de-fisheyes, and sends to video1, while video1 is configured normally in hyperion as the capture device. No need to use the proto server or anything else.
    The only point of using the proto server or similar is when you want to send the image from a different device, but I assume it is preferred if it works on a single RPi.
    The only harder part is putting that Python OpenCV script together.
     
    • Like x 1
  6. Andrew_xXx

    Andrew_xXx Software Developer

    Messages:
    19
    I have found an even easier way for 180° fisheye cams to work with hyperion: you can use a single ffmpeg command to grab the camera image, correct the lens distortion, and send it to a virtual camera. Still testing, not ideal, but it kinda works; I will post the commands later.
     
    • Like x 1
  7. Flovie

    Flovie New Member

    Messages:
    7
    Hardware:
    RPi3
    This sounds like a good idea, especially when you are on the same device. I also made some progress with OpenCV and the protobuffer on a remote device for capturing the images. Since yesterday I have a working proof of concept, however with a noticeable latency. I need to add some threading and will report back later with the source code.
     
    • Like x 1
  8. Andrew_xXx

    Andrew_xXx Software Developer

    Messages:
    19
    From my tests, I'm just afraid that the OpenCV fisheye correction could be too resource-intensive for the RPi. Also, with OpenCV you need a camera setup procedure that involves calculating the camera and distortion matrices through a calibration procedure that isn't easy to do; it's not possible to guess the matrix values or pick universal ones. ffmpeg, on the other hand, has only two parameters, so it can even be trial and error, which is much simpler.
    Also, not all fisheye lenses are the same; not all are ideal, high-quality 180° lenses.
     
    • Like x 1
  9. Flovie

    Flovie New Member

    Messages:
    7
    Hardware:
    RPi3
    Totally agree. I only have a perspective distortion, so the matrix is simpler and easy to handle in OpenCV. Nevertheless, I will also take a look at the ffmpeg approach, because it appears very comfortable and should also support IP cams, which would fulfill my requirement of an external device for capturing. Thank you for pointing in this direction.
     
    • Like x 1
  10. Andrew_xXx

    Andrew_xXx Software Developer

    Messages:
    19
    Also, it seems that ffmpeg has more advanced methods to undistort fisheye than the basic lens correction filter: there is also a remap filter and a new v360 filter. I could not test either for now, but v360 looks easy and promising.
     
    • Like x 1
  11. Andrew_xXx

    Andrew_xXx Software Developer

    Messages:
    19
    For those that need this

    Perspective correction using ffmpeg for hyperion

    Do note that this will not work for fisheye or barrel lenses; you need a normal flat camera. If the lens is only slightly barrel-distorted, then maybe it would work with a little crop. Also note that I do not currently have an RPi or a camera; everything was tested on a virtual machine with Raspberry Pi Desktop, and some things on ffmpeg for Windows with virtual cameras.

    It uses an additional virtual camera on the RPi that hyperion reads from: ffmpeg grabs the real camera stream, corrects the perspective, and sends it to that virtual camera.

    Steps are

    1. Install the virtual camera software: v4l2loopback-dkms
    2. Run sudo modprobe v4l2loopback; this creates an additional virtual camera. If your real camera is video0, the virtual one will be video1
    3. Configure hyperion to use video1 as the source
    4. Grab the real camera stream, use ffmpeg to correct the perspective, and send it to the virtual camera; the command is
    ffmpeg -re -i /dev/video0 -vf "perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1
    5. That's it. After running this command, the virtual camera will stream a perfectly flat, rectangular image for hyperion to use, as long as the command keeps running

    Remember, this is just a one-time configuration; these commands will not run automatically every time. If you want it to work every time the RPi boots up, you need to run commands 2 and 4 from a startup script.
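    For reference, steps 2 and 4 can be wrapped in a small script for exactly that. This is only a sketch; the device paths and corner values are the ones from the steps above, so adapt them to your setup:

```python
def build_ffmpeg_cmd(corners, src="/dev/video0", dst="/dev/video1"):
    """Build the ffmpeg argv for step 4 from the eight perspective
    corner values (top-left, top-right, bottom-left, bottom-right
    x/y pixel coordinates)."""
    filt = "perspective=" + ":".join(str(c) for c in corners)
    return ["ffmpeg", "-re", "-i", src,
            "-vf", filt, "-map", "0:v", "-f", "v4l2", dst]

# Usage from a startup script (step 2, then step 4), e.g. with
# the subprocess module:
#   subprocess.run(["sudo", "modprobe", "v4l2loopback"], check=True)
#   subprocess.run(build_ffmpeg_cmd([382, 127, 1563, 91,
#                                    387, 761, 1495, 986]))
```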

    Additional information

    How to know what values to put in the perspective filter?

    - You can get the values using any image software that shows you exact pixel locations on an image, like XnView. Put your camera in the position it will always be used in, take a camera screenshot, and read off the pixel locations of the corners of your TV. The order in the perspective filter is: top left, top right, bottom left, bottom right, so it will be 8 numbers in total. If you change your camera position, you will need to repeat the procedure.
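    To make the geometry concrete: those 8 numbers define a homography (projective mapping) that sends the four marked source corners to the four corners of the output frame (assuming the filter's default sense; the 1920x1080 output size below is just an assumption). A pure-Python sketch that fits that mapping from the four point pairs:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Fit h (8 parameters, h33 fixed to 1) so each src corner maps to its dst corner."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

def apply_h(h, x, y):
    """Map one source point through the homography."""
    w = h[6] * x + h[7] * y + 1
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# Corners from the ffmpeg command above, in the filter's order:
# top-left, top-right, bottom-left, bottom-right.
src = [(382, 127), (1563, 91), (387, 761), (1495, 986)]
dst = [(0, 0), (1919, 0), (0, 1079), (1919, 1079)]  # assumed 1920x1080 output
h = homography(src, dst)
```

    By construction, feeding a marked TV corner through the fitted homography lands it exactly on the matching output corner, which is what the filter does per pixel.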

    What if my tv is just a small portion of the camera image?

    - If there are large portions of the image without the TV, you can cut the borders using crop before applying perspective, but then you need to make a screenshot with crop alone and get the pixel positions for perspective from it: ffmpeg -re -i /dev/video0 -vf "crop=w=100:h=100:x=0:y=0,perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1

    What if I have a slight camera distortion, positive or negative?

    - If your camera has a slight distortion, you can try to remove it with the lens correction filter. The most common distortion is barrel. Types of distortion:

    (image: types of lens distortion)

    The command is
    ffmpeg -re -i /dev/video0 -vf "lenscorrection=k1=-0.0:k2=0.0,perspective=382:127:1563:91:387:761:1495:986" -map 0:v -f v4l2 /dev/video1

    You need to trial-and-error the k1 and k2 values; they are in the -1..1 range. Lens correction won't work for fisheye 180° lenses, at least in my tests I couldn't find values where the image looks fine, but in some instances it could make things maybe a little better, though probably still unusable.
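    For intuition while trial-and-erroring: my reading of the lenscorrection docs is that, for each output pixel, the filter samples the source at a radius scaled by 1 + k1*r^2 + k2*r^4, where r is the distance from the image center normalized by half the diagonal. So a negative k1 pulls edge samples toward the center, which is what straightens barrel distortion. A small sketch of that model (the exact normalization is my assumption from the docs):

```python
import math

def sample_coord(x, y, w, h, k1, k2):
    """Where the lenscorrection model samples the source image for
    output pixel (x, y): the offset from the center is scaled by
    1 + k1*r^2 + k2*r^4, with r normalized by half the diagonal."""
    cx, cy = w / 2, h / 2
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    f = 1 + k1 * r ** 2 + k2 * r ** 4
    return cx + (x - cx) * f, cy + (y - cy) * f
```

    With k1 = -0.2 on a 1920x1080 frame, the center pixel is untouched while a corner pixel samples 20% closer to the center; positive k1 does the opposite, for pincushion distortion.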


    I have put up some example media if you want to just test it; it consists of a short sample TV-angle video and a screenshot from it:
    https://www120.zippyshare.com/v/oEKEnVCm/file.html

    The values put into the ffmpeg perspective filter in the steps above are for this sample video.
    You can quickly test values, either with this sample or with your own video, with the command
    ffplay "sample tv angle loop.mp4" -loop 0 -y 980 -vf "perspective=382:127:1563:91:387:761:1495:986"
    For your own video, you just need to change the perspective values and/or add crop if needed.
     
    Last edited: 16 February 2020 at 14:02
    • Like x 1