iphone camera ruminations

Had a chance to digest some reviews of the iPhone 15 Pro cameras, and a lot of positive changes have hit this year. The non-Max Pro seemed largely static at first, but software advances outpaced hardware this time around, and that means a host of improvements across the line. Colors are more natural, there are more megapixels outside of RAW modes, smudgy noise reduction gives way to more natural photographic grain, portrait mode engages automatically, focal length breakpoints are newly configurable, and in general it’s all kinds of work showing that photography is to Apple today what music was in the heyday of the iPod.

While considering these many software changes, a potential explanation for the diminished veil of secrecy around Apple’s future phones occurred to me. It’s always been true that hardware is locked in many months before an iPhone announcement and launch, owing to things like supply chains and manufacturing pipelines, but the deep focus on photography would seem to require stretching those timelines even further.

It’s been ages since we first heard “machine learning” pop up as a talking point in an Apple Event, but it’s clear from recent emphasis that ML hardware and algorithms play a key role in their products. Nowhere is that more plainly seen than in the camera pipeline. The success of an ML model relies on training data that is highly representative of what the algorithm will encounter in performing its task. For iPhones to rely on these models in the core systems governing that shutter button and its trillions of descendant operations, they need to train on data from exacting specimens of the hardware they will operate on. That means bringing future-spec hardware to the places you’d expect people to take pictures, and capturing thousands (likely thousands upon thousands) of images, so you can train models to the level of confidence customers expect from the function of a phone tasked with capturing our memories.
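To make that dependency concrete, here’s a toy sketch in Python (emphatically not Apple’s actual pipeline; the sensor revision string and helper names are invented for illustration) of the idea that a camera model should only train on frames captured by the exact hardware revision it will ship on:

```python
# Toy illustration, not Apple's pipeline: a training set that only
# accepts frames captured on the exact sensor revision the model will
# ship on. Frames from last year's sensor would teach the model the
# wrong noise profile, color response, and lens characteristics.
from dataclasses import dataclass

@dataclass
class CaptureSample:
    sensor_revision: str    # hypothetical ID, e.g. "next-gen-sensor-rev-B"
    raw_frame: bytes        # raw sensor readout
    reference_frame: bytes  # ground-truth target for the model

def build_training_set(samples: list[CaptureSample],
                       target_revision: str) -> list[CaptureSample]:
    """Keep only frames from the hardware the model will actually run on."""
    return [s for s in samples if s.sensor_revision == target_revision]

# Hypothetical usage: months before launch, prototype units gather
# captures in the field; only matching-revision frames feed training.
# train_set = build_training_set(all_captures, "next-gen-sensor-rev-B")
```

The point of the toy filter is what it encodes: no captures from matching hardware means no training set, and everything downstream waits.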

For a QA process to back a product shipping tens of millions of units on day one, you can’t truly validate the ML algorithms, or any pipeline stages downstream of them, until you have a strong candidate dataset to train on. That means a massive block of work that can’t earnestly begin until data is in the hands of engineers. The procedural pipeline that makes this happen will likely improve over time, but I don’t expect truly surprising camera hardware revelations at iPhone Events any time soon.

This all calls back to the many software changes for the iPhone 15 Pro. Carrying over the “Wide” (24mm equivalent) lens/sensor package from last year provided a much fuller opportunity to push greater value out of the existing hardware in software, with no scarcity of data for modeling. Will Apple fall into a tick-tock between hardware and software on iPhone cameras?

October 5, 2023 at 4:29 pm
