Quicksilver: Four Things and a Lizard

2018-10-20 18:40:05

(Pixelmator Pro 1.2 Quicksilver)
Late last night I created an image and uploaded it to Twitter. Today I looked at it again and it's sludgy and dull. Here's what you can learn from my mistakes (and I can vent a bit about screwing up):
1. Ambient light/environment has implications. You know it's obvious, I know it's obvious and we've both known this for years. I still fell for it, though. I edited late at night with few lights on in my flat and my screen (and therefore the image) just looked deceptively bright.
2. Dark mode/night mode has implications. Like a lot of the Apple world, I'm playing with dark modes. The mode on my Mac changes with time of day and mood. Websites, too. All except YouTube: that looks better in its 'Dark Theme' no matter the time of day. The point is, when I posted to Twitter, I had Twitter set to 'Night Mode', which made the image look less sludgy. I might have noticed the problem immediately had I posted with Twitter in its lighter colour scheme.
3. Find a mid-grey reference. Recently (I only noticed it in 1.2, at least) Pixelmator Pro gained the ability to change the colour of the bit that surrounds the canvas (it's in Preferences, called Window Background). Do yourself a favour and set it to Custom, then pick a neutral 50% grey. If your eyes are anything like mine, you will initially hate the way it looks. Fight the impulse to change it straight back and tell yourself that this is a really good way to check both the brightness and the colour balance of your image. Now, we can argue about whether this should be half-way along the grey slider or 50% RGB (yes, they are different; see the sketch after this list), but whatever you pick, make sure it's a neutral grey and stick with it. It will start to look better over a couple of hours' use and will help you adapt to different environments.
4. ML Enhance works. I imported my sludgy export back into Pixelmator Pro and clicked the ML Enhance button*. The result was a little bit too bright for my taste. Just a little, though, and way better than my first export. I dialled back the Exposure a tiny bit, posted it again on Twitter as a reply to the original image - my followers must love that - pinned it, and switched again to 'light' mode. On reflection, it could probably do with being a shade lighter, so I'd say that ML Enhance got it right.**
5. A Lizard (🦖). There's always time for a Doctor Who reference.
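
A quick aside on point 3, since 'half-way along the grey slider' and 50% RGB really are different things: the difference is gamma encoding. Here's a minimal Swift sketch using the standard sRGB transfer functions. It's generic colour maths to illustrate the point, not anything to do with how Pixelmator Pro's slider actually works.

```swift
import Foundation

// Standard sRGB transfer functions (IEC 61966-2-1).
// Generic colour maths, not Pixelmator Pro's own code.

/// Convert an sRGB-encoded component (0...1) to linear light.
func srgbToLinear(_ c: Double) -> Double {
    return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

/// Convert linear light (0...1) back to an sRGB-encoded component.
func linearToSRGB(_ l: Double) -> Double {
    return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055
}

print(srgbToLinear(0.5))   // ≈ 0.214: a 50% RGB grey emits only ~21% of the light of white
print(linearToSRGB(0.18))  // ≈ 0.461: photographic 18% middle grey encodes to ~46% RGB
```

So two greys that both sound like 'half way' can be noticeably different patches on screen, which is why it's worth picking one definition and sticking with it.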

Hope this helps someone. Venting has helped me. I feel a whole lot better. :grinning:

- Stef.
* I couldn't apply ML Enhance to a group, so I had to flatten my image to use it. Is this a bug?
** I've pressed the button a few times and have got different settings out of it. I think that the machine learning in Pixelmator Pro is using an algorithm with some random element in it, such as Monte Carlo methods, simulated annealing, or something like that. (Edit: no, it isn't. See Anton's explanation below.)

2018-10-23 11:30:11

st3f, talking about the different adjustments you got from ML Enhance:

There is no randomisation in ML Enhance. The result depends on:
1) The GPU used for neural network processing. Core ML may decide which GPU to use for each particular invocation of the neural network, so on a Mac with two GPUs you will typically see two different results: the first computed on the built-in GPU, and the second and subsequent ones on the discrete GPU (see the sketch after this list).
2) The initial adjustments you already have. This one is interesting, and I will discuss it in detail below.
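
If you are curious where that choice lives in code: an app can constrain which compute units Core ML may use through MLModelConfiguration, but within those constraints Core ML still picks the actual device for each run. A minimal, generic sketch (with a made-up model name, not our actual code):

```swift
import CoreML
import Foundation

// Generic Core ML sketch, not Pixelmator Pro's actual code.
// With the default .all, Core ML is free to run each prediction on the CPU
// or any available GPU, which is one reason repeated runs can differ slightly.
func loadEnhanceModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // or .cpuOnly / .all

    // "Enhance.mlmodelc" is a placeholder name for some compiled model.
    let url = URL(fileURLWithPath: "Enhance.mlmodelc")
    return try MLModel(contentsOf: url, configuration: config)
}
```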

Just as ML Enhance gives you a starting point for further adjustments, you can give a starting point to ML Enhance. This was done deliberately, because the same image can be adjusted in different ways, each of which looks good. So you can suggest to ML Enhance which style you prefer by using some initial non-zero values for the color adjustments. For example, you can increase Saturation before pressing the ML Enhance button. This way you inform the auto-adjustments engine that you prefer saturated images.

Please note that, for performance reasons, some settings are invisible to ML Enhance and will simply be ignored by the auto-adjustments engine. For example, Levels and Curves are ignored. What is surprising here is that Vibrance is also invisible to Auto Adjustments, so to avoid confusion it is always set to zero when you press ML Enhance.

2018-10-23 11:32:09

P.S. Anton's our resident scientist who works on all these algorithms. :wink:

2018-10-23 12:08:50

Hi Anton (and hi Andrius).

Thanks for taking the time out to explain this. The ability ML Enhance has to take the settings you put in and run with them will change the way I drive Pixelmator Pro, so it's really good to know. It's going to be interesting to throw in some big colour shifts, bump up the saturation, and see what it does.

Unfortunately that will probably have to wait. I'm stupidly busy for the next week and then just busy for the month after that. I'll try and fit some time in to play with this when I can, though.

I hope you don't mind if I throw another question your way: is there anything in the algorithm that would prevent it from being applied to a group rather than a layer? A lot of the work I do is generating images entirely within Pixelmator Pro (rather than importing a photo and working on that), so my work is often a long and complex hierarchy of grouped layers. It would help me if I could use ML Enhance on the top enclosing group rather than exporting and re-importing.

Thanks again for the explanation.

- Stef.