Page 2 of 8

Re: New conversion algorithm

Posted: Wed Sep 11, 2019 11:36 pm
by sam
Okay, I have made the Lua program work on the command line in addition to GrafX2 (yes, it can do both). Natively it can read uncompressed 24-bit BMP files, but if you have ImageMagick's convert command in the path[1], it'll be able to process almost any kind of image. For convenience, a fast Lua interpreter[2] is included in the ZIP.

Usage: <lua-interpreter> oric_tst5.lua <filename>.<ext>
  • <lua-interpreter> is the Lua interpreter you want to use (LuaJIT.exe for instance under Windows)
  • <filename>.<ext> is the full path to the picture you want to convert
It'll produce a <filename>.tap next to the source file. It is an Oric tap file with a BASIC loader.
It'll also produce a <filename>.tap.bmp file next to the source to allow easy viewing of the result without an Oric emulator.

Typical usage under Cygwin:

Code:

for i in c:/path/to/folder/*; do ./luajit.exe oric_tst5.lua "$i"; done
I might later add some command-line arguments to launch the emulator on the generated tap file directly, to enable/disable the BASIC loader, or other things.
[1] I did not provide it in the ZIP because it is about 5MB compressed and I'm not sure the forum can handle such a huge attachment. Pick the stand-alone Windows build (the exe is ~12MB, but it can convert almost any file to BMP).
[2] LuaJIT.exe: a small 32-bit exe for Windows. Linux folks usually have Lua installed already, or can compile LuaJIT easily.

Re: New conversion algorithm

Posted: Thu Sep 12, 2019 2:43 pm
by sam
Now with the cmd-line version, I can process a lot of pictures and estimate the robustness of the algorithm :D

So far, it behaves nicely. Of course some pictures are not as "smooth" as I wanted, but I'm not sure if that is an issue with the algorithm or simply that the picture cannot be "smoothly" approximated within the Oric's video constraints.

Anyway, here are random results from pictures that I used as references when working on image conversion for the Thomson machines[1][2][3] (which explains their 16:10 aspect ratio).

Re: New conversion algorithm

Posted: Thu Sep 12, 2019 4:36 pm
by Yicker
Great work, very impressive.

Re: New conversion algorithm

Posted: Thu Sep 12, 2019 6:02 pm
by Dbug
Nice work! What I'm wondering, though, is why, out of all the possible names in the galaxy, the two people who came up with a decent conversion algorithm for the Oric both had to be named *sam*!

In PictConv I have this:
-f6 => Sam method (Img2Oric)

so if I wanted to integrate this new algorithm I would have to add something like:
-f8 => Sam method (lua san, not the same Sam)


Re: New conversion algorithm

Posted: Thu Sep 12, 2019 6:45 pm
by sam
That's why I usually log in as __sam__, but this forum didn't accept the leading underscore. SamDev is also an option, or Sam/Puls as I am from the Puls demo group.

As far as my tests go, the algorithm works nicely. Nothing really bad is produced. It is, in the latest version provided above, good enough for general use. I should now consider porting it to C and maybe sign it as "Sam Sufi" (a French joke -- it sounds like "this is good enough") :P

From the PictConv sources on GitHub, I found that the other sam's converter is named oric_converter_samhocevar.cpp, so calling mine oric_converter_samueldevulder.cpp would be possible. I'm not really at ease with GitHub development[1] (except for finding bugs), but I might write a skeleton of that file with the algorithm and publish it here for someone else to test and integrate into GitHub -- or you can show me how to build PictConv with MSys2Portable.
[1] See this attempt to compile the OSDK, which failed miserably:
Sans titre.png
which makes me look like this in front of my screen
Sans titre2.png

Re: New conversion algorithm

Posted: Thu Sep 12, 2019 7:40 pm
by iss
Congrats, sam! The latest version works for me and the result is cool.

Just some minor, random, very-Linux-specific thoughts:

- use dos2unix to convert the Lua source's line endings;
- if the first source line is #!/usr/bin/env lua and you chmod +x oric_tst5.lua, then you can execute it directly from bash:

Code:

./oric_tst5.lua image_file.jpg
- ... and the 'biggest' problem appears: if no path is given, the output file(s) are created in the root / :shock: To fix this I'm using:

Code:

local fullname = (path ~= '' and (path .. '/') or '') .. tapname -- parentheses needed: '..' binds tighter than 'and'/'or', so without them tapname is only appended when path is empty
- about io.popen(convert,'rb'): unfortunately io.popen's 2nd argument is platform-dependent, and on Linux it must be just 'r' -- because redirection is always 'binary';

- finally, I tried to use 'convert' to resize the input image to 240x200; changes:

Code:

local convert_resize = 1 -- set to 1 to use convert to resize the input to 240x200

local convert = 'convert "' .. filename .. '" '
-- use external resize
if 0 < convert_resize then
    convert = convert .. '-layers merge -resize 240x200 +repage '
end
convert = convert .. '-type truecolor -depth 8 bmp:-'
bmp = read_bmp24(assert(io.popen(convert, 'r')))
From this original 'full-color' image the result is:
Lamborghini-Gallardo-9.png
There are some visual differences but I can't judge which one is better :)

Again well done!

Re: New conversion algorithm

Posted: Thu Sep 12, 2019 8:35 pm
by Dbug
Regarding the OSDK, the official version is actually in my SVN repository; the neko version is a fork where he does tests (he works on Mac).

I'm going to let you all play with the lua version, and when you are happy, we can convert it to C or C++ and add it as a new method for pictconv, with whatever name you want :)

Re: New conversion algorithm

Posted: Thu Sep 12, 2019 11:12 pm
by sam

* Convert and the internal resize do more or less the same thing. The difference is that the internal version works in a continuous linear-RGB colorspace, preventing issues with gamma correction. Older versions of "convert" do not respect gamma at all, and newer versions quantize the linear colorspace to 8 or 16 bits depending on the version. 8 bits is definitely not enough; 16 might be, but still introduces small quantization errors. Lua, working on doubles, doesn't suffer from this. That said, the difference between the two pictures is very marginal.
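To make the gamma point concrete: averaging pixels (which is what any resize does) must happen in linear light, not on gamma-encoded sRGB values. A minimal Python sketch of the standard sRGB transfer functions (this is the generic IEC 61966-2-1 formula, not code taken from the converter):

```python
def srgb_to_linear(c):
    """Map an sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transform, back to gamma-encoded sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Averaging a black pixel and a white pixel naively in sRGB gives 0.5,
# but averaging in linear light and converting back gives ~0.735,
# which is the photometrically correct value for a 50/50 pixel mix.
a, b = 0.0, 1.0
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
```

An 8-bit quantization of the linear values would crush the dark end of this curve, which is the issue described above; doubles keep the full precision.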

* It would be nice to compare the result with the one from PictConv or pipi.exe, to see if we get noticeable differences from the existing standards.

* About files being created at /: this is a bug in the code. I blindly add '/' between the path and the name even when the path is empty, as you noticed. I was about to replace the empty path with "." to fix the issue.

[Edit 19/09] I have uploaded a new zip above. It includes fixes for (hopefully) all the things you reported, and has some fine-tuned parameters that better suit my corpus (images are less noisy). For those who also want the convert.exe tool, there is a temporary ZIP >>there<<.

Re: New conversion algorithm

Posted: Mon Sep 23, 2019 2:59 pm
by waskol
It's soooooooooo impressive !!!! :shock:

Re: New conversion algorithm

Posted: Tue Sep 24, 2019 5:08 pm
by mikeb
It's more than impressive, it's almost some kind of magic trick.

I feel I've missed an explanation of what exactly is going on here (all the focus earlier is on tweaking dithering algorithms to improve the output).

I get that you can dither a full-colour image down to a limited colour range (e.g. for GIF encoding or other reduced output devices) to try and "best represent" it using what's available, which is easy to display if you can plot any of the reduced-but-available colours at any pixel.

It's the mechanics of creating graphics like that in the Oric's HIRES mode that is not obvious to me, so I've stared at some of these pictures close up in GIMP, and I *think* the method on each line is:

1) Selecting a paper/ink combination, giving two usable colours (fg, bg), e.g. BLACK, RED, and plotting individual pixels as normal.

2) After part of a line, either change the FG or BG colour (rarely both?). Each time, this comes with a "penalty" of 6 pixels (or 12, if both) of all-BG/all-FG colour whether you want it or not, deviating from the "correct" dithered output [*1]

3) Sometimes, get lucky by using the inverse bit to complement the colours without needing an attribute (giving you immediate use of e.g. WHITE, CYAN) at no cost.

[*1] And then let the dithering algorithm deal with the fact that you didn't necessarily produce the correct balance of FG/BG pixels at this point, so feed back this "error" to compensate on the lines above/below where possible.
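In pseudo-Python, my mental model of one screen byte is something like the following (this is based on the usual descriptions of the Oric HIRES format, not on the converter's code: bit 6 selects pixels vs. attribute, bit 7 is the invert bit, and an attribute byte still displays 6 pixels of paper):

```python
def decode_hires_byte(byte, ink, paper):
    """Decode one Oric HIRES screen byte.

    Returns (pixels, ink, paper), where pixels is a list of 6 colour
    indices (0-7) and ink/paper are the possibly updated attributes.
    """
    invert = bool(byte & 0x80)          # bit 7: invert displayed colours
    if byte & 0x40:                     # bit 6 set: bits 0-5 are pixels
        fg, bg = (7 - ink, 7 - paper) if invert else (ink, paper)
        pixels = [fg if byte & (1 << (5 - i)) else bg for i in range(6)]
    else:                               # attribute byte: the 6-pixel "penalty"
        code = byte & 0x3F
        if code < 8:
            ink = code                  # 0-7: set foreground (ink)
        elif 16 <= code < 24:
            paper = code - 16           # 16-23: set background (paper)
        bg = 7 - paper if invert else paper
        pixels = [bg] * 6               # attribute cells show paper only
    return pixels, ink, paper
```

So the invert bit really does give a "free" second colour pair per cell, while every ink/paper change costs a full cell of paper.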

Is this close to correct?

Re: New conversion algorithm

Posted: Tue Sep 24, 2019 7:25 pm
by sam
Yeah, you are close. Let me summarize the state of the art like this: basically, all Oric conversion algorithms are brute force with a bit of optimization. Brute force is something like the following:
For each line: for each of the 40 octets, apply one of the following commands:
  • use 2-colour dithering with the current bg/fg pair or the 7-bg/7-fg pair
  • change bg (all pixels are bg or 7-bg)
  • change fg (all pixels are bg or 7-bg)
Then evaluate the error and keep the line with the least error.
This enumerates, and selects the best among, all the possible Oric lines approximating the original picture. Voilà, that's all brute force is. Simple... at least in theory.

In theory? Yes, because in practice the number of possibilities to examine is around 15**40 (> 1E+47), which is tremendous.
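For illustration, the brute-force baseline sketched above fits in a few lines of Python (render and error here are stand-ins for the real dithering and colour-distance functions; the Oric case is width=40 with ~15 commands per octet):

```python
from itertools import product

def best_line(target, commands, render, error, width=40):
    """Exhaustive search over all per-octet command sequences for one line.

    With ~15 commands per octet and width=40, this enumerates 15**40
    (> 1E+47) candidate lines, which is why pure brute force remains
    a purely theoretical baseline on the Oric.
    """
    best, best_err = None, float('inf')
    for seq in product(commands, repeat=width):
        err = error(render(seq), target)
        if err < best_err:
            best, best_err = seq, err
    return best
```

On a toy 3-octet "line" with two commands this returns the exact optimum, but the loop count grows as len(commands)**width, hence the 1E+47 figure above.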

Sam Hocevar's algorithm reduces this number by not evaluating the full line, but only a small look-ahead from the current position. This reduces 15**40 to something like 40*(15**DEPTH), with DEPTH the look-ahead size (2 octets in libpipi). This shrinks the search space to a few thousand cases, giving very interesting results, but sometimes clearly not optimal ones because the look-ahead is too small.

In my solution, I have found that some terms in the evaluation of the error can be neglected. This allows finding an approximation of the least-error values much, much quicker (say, in logarithmic time). The neglected terms are small enough for the result to still be pretty good. Of course this isn't strictly the least-error solution, but who cares about being 1% away from the exact optimum when other methods are worse by several orders of magnitude?

So to sum up: Sam Hocevar finds the exact optimum error over a (very) small subset of the search space, whereas my algorithm finds an estimate of the optimum, but over the full search space.

Another small difference comes from the function evaluating the error. Sam Hocevar uses an empirically determined formula based on Euclidean distance (combined with a bit of black magic, of course), whereas I use an approximation of CIE's delta-E, which is the state of the art in measuring how colour differences are perceived by the human eye. This is less fancy than empirical black magic, but the results prove it is worth it (though not as much as approximating the optimum over the full search space versus restricting the search space).
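For reference, the simplest member of the delta-E family (CIE76) is just Euclidean distance in CIELAB space. Which exact delta-E approximation the converter uses isn't stated above, so this is only the textbook definition:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    (L*, a*, b*) triples.  A delta-E of about 2.3 is commonly quoted
    as a just-noticeable difference, which is exactly the perceptual
    anchor that a raw RGB distance lacks."""
    return math.dist(lab1, lab2)
```

The RGB-to-Lab conversion (via XYZ) is omitted here; later variants (CIE94, CIEDE2000) add weighting terms but still take the same Lab inputs.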

Re: New conversion algorithm

Posted: Tue Sep 24, 2019 9:30 pm
by Chema
Impressive.... very impressive. Congratulations!

Re: New conversion algorithm

Posted: Wed Sep 25, 2019 5:40 pm
by mikeb
Thank you for the detailed explanation @Sam !

Trying to optimize the error over the whole picture, for every possible combination, is -- as you say -- brute force.

It does seem that errors from failing to dither correctly (due to being forced to emit 6 pixels of FG or BG on an attribute/colour change) can only really affect a small locality, and can't be corrected by doing anything in other parts of the picture -- so it's a good optimization to look locally and be satisfied!

It's not entirely clear -- is the look-ahead/least-error evaluation done only within ONE scanline (1D)? Is there any mileage in looking back and forward 1 or 2 scanlines (in 2D) to try and correct further -- especially where an attribute was dropped in? Or does that send the algorithm into an indecision loop? :)

Fantastic stuff.

Re: New conversion algorithm

Posted: Wed Sep 25, 2019 6:46 pm
by sam
Actually it isn't the whole picture which is optimized, but a single line. This is already a big simplification of the problem, since the state space for a whole picture is (state-space-for-a-line)**200, which is something like 5E+9408 (15**8000) possibilities to test... This is far too big to have any real human or computer meaning: whatever computing speed you assume (say 1E+15 operations/sec), the number of seconds is still huge (5E+9393). That is actually longer than the expected lifespan of a proton, so the computing unit will disintegrate well before the computation ends ;)

Trying to optimize over 2 lines might be tempting. I haven't thought about it yet, but at first sight it looks much more difficult, since the octets could no longer be considered independent. In my algorithm I can neglect the single-pixel error (it is tiny) coming from the previous octet of the line, but in the 2-line case there are errors coming from the octet just above the current one, and that is 6 pixels, which I don't think can be neglected anymore.

Re: New conversion algorithm

Posted: Thu Sep 26, 2019 7:36 am
by Dbug
Question: Given an actual Oric picture screenshot as an input (so something that is already using the right colors, but we don't technically know if that was achieved with paper, ink or invert tricks), is your algorithm able to reproduce a 100% visually identical image, or will there be some errors here and there?

Basically my question is: can we use your algorithm as a kind of universal converter, which will dither and error-compute things that are obviously not displayable on an Oric, but will just keep "as is" whatever already matches the Oric constraints?