Something to consider is the memory usage for the images that you will be processing.
Take a 100×200 black-and-white (grayscale) image.
That 100×200 array takes up roughly 20K of RAM – one byte per pixel. If it is a color image, it would be three times that, because each of the three color channels (RGB) is stored.
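A quick sketch of those numbers using NumPy (an illustrative example, assuming 8 bits per channel):

```python
# Rough RAM footprint of raw image arrays, 8 bits per channel assumed.
import numpy as np

gray = np.zeros((200, 100), dtype=np.uint8)      # 100x200 grayscale, 1 byte/pixel
color = np.zeros((200, 100, 3), dtype=np.uint8)  # same size, 3 channels (RGB)

print(gray.nbytes)   # 20000 bytes - roughly 20K
print(color.nbytes)  # 60000 bytes - three times as much
```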
Again, we are talking about RAW image data – there are compression techniques that can be applied to reduce this space – but the trade-off is speed, as you will need to decompress the image before you can work on it.
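The trade-off can be seen with a minimal sketch, here using Python's zlib on raw pixel bytes as a stand-in for a real image codec like PNG:

```python
# Space-vs-speed trade-off: compress raw pixel data (zlib standing in for a
# real image codec), then pay decompression cost before processing.
import zlib
import numpy as np

raw = np.zeros((200, 100), dtype=np.uint8).tobytes()  # all-black 100x200 image
packed = zlib.compress(raw)

print(len(raw))     # 20000 bytes raw
print(len(packed))  # far fewer bytes - an all-black image compresses extremely well

# To process the image you must decompress it first - that costs CPU time.
restored = zlib.decompress(packed)
assert restored == raw
```

Real-world photos compress far less dramatically than this all-black frame, but the shape of the trade-off is the same: less RAM, more CPU.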
Given today's machines – even a Raspberry Pi Zero (512MB of system RAM) – working with RAW images should not be a problem.
When you get into the smaller MCU world, that is when things start getting tricky. We will get into all of that later.
The image array does take up RAM: the larger the image, the more RAM it needs, and the longer it takes to process. And the more processing, the more power that is consumed.
There is always a trade-off – even in today's world of high clock speeds, more cores than you can shake a stick at, and gigs and gigs of RAM.
The argument is always… "But I have a 12-core machine with 128GB…" That is nice – you can process a lot of images in a short amount of time. But why not make it as efficient as possible? What if the goal was to process 10K images per second, looking for defects? Would something the size of 8256×5504 be overkill? I think it would.
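To put that in perspective, a back-of-the-envelope calculation using the numbers above (assuming raw RGB at 3 bytes per pixel):

```python
# Back-of-the-envelope numbers for the 10K-images-per-second scenario,
# assuming raw RGB frames at 3 bytes per pixel.
width, height = 8256, 5504
bytes_per_image = width * height * 3       # ~136 MB per raw RGB frame
images_per_second = 10_000

throughput_gb = bytes_per_image * images_per_second / 1e9

print(bytes_per_image)  # 136323072 bytes, about 136 MB per frame
print(throughput_gb)    # over 1300 GB/s - far beyond ordinary memory bandwidth
```

Even just moving that much raw data through memory every second is unrealistic on commodity hardware, before a single pixel has been inspected for defects.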
Bigger is not always better.
Aim for a size that gets the job done.
Have a Project or Idea!?
I am Available for Freelance Projects
My skills are always primed and ready for new opportunities to be put to work, and I am ever on the lookout to connect with individuals who share a similar mindset.
If you’re intrigued and wish to collaborate, connect, or simply indulge in a stimulating conversation, don’t hesitate! Drop me an email and let’s begin our journey. I eagerly anticipate our interaction!