Friday, February 26, 2016

DynamicImage

The user experience of imagery on the web is terrible. When I try to find images of the Bullet Cluster for my son, I see all the ways in which the paradigm is just crap. The reasons are partly technical and partly legal.

From the technical angle, I hereby propose some food for thought. I will write it somewhat tongue-in-cheek because I know there are religious wars over how to write a good use case / user story / feature request / prd / kanban ticket / whateverthefudgeyoucallit.

As an end user,
when I get an image on my touch screen device,
I want to directly zoom it
and have it support self-updating for arbitrary resolution as I zoom,
so NASA images don't look like pixellated crap.

This would probably require a new kind of HTML or JavaScript image concept. I will call it the DynamicImage for the heck of it. (Probably a terrible name. Always be ready to refactor / change / kill names.)

The main feature of the D.I. is that it supports interaction. As the visible area of the image on screen changes, due to user manipulation for example, the image can change what it is showing. As I zoom in on the 640x480 area of my little phone, the image dynamically loads new data so that I get a high-res zooming experience rather than lame bitmap scaling on the client.

Maybe the D.I. supports an API call like (pseudo Java):
ImageInterface getRegion( Rectangle region, Zoom zoom );
where ImageInterface has concrete classes like Jpeg or whatever; Rectangle is, you know, a rectangle; Zoom is pretty much an alias for a floating point number.
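Spelled out in TypeScript, a sketch of that API might look like the following. Every name here is hypothetical; no browser actually exposes anything like this, except ImageBitmap, which is a real decoded-bitmap type.

// Hypothetical DynamicImage API -- a TypeScript restatement of the pseudo
// Java above. None of these interfaces exist in any real browser.

interface Rectangle {
  x: number;      // left edge, in source-image pixel coordinates
  y: number;      // top edge
  width: number;
  height: number;
}

// "Zoom is pretty much an alias for a floating point number."
type Zoom = number;

interface DynamicImage {
  // Fetch the pixels covering `region` at `zoom`, at whatever resolution the
  // back end can deliver. Resolves to a decoded bitmap the browser can paint.
  getRegion(region: Rectangle, zoom: Zoom): Promise<ImageBitmap>;
}

Returning a promise rather than a concrete Jpeg object keeps the browser free to decode whatever format the back end happens to send.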

The idea is that the web browser will dynamically update the region as the user zooms in/out. It will hit whatever the back end is and get results.
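As a rough sketch of that update loop, assuming the hypothetical DynamicImage above plus a made-up getVisibleRegion() helper, the client side might do something like this:

// Browser-side refresh loop: every time the viewport changes, re-request the
// visible region and repaint when the data arrives.

declare function getVisibleRegion(): Rectangle; // hypothetical helper

let latestRequest = 0;

async function onViewportChanged(
  image: DynamicImage,
  ctx: CanvasRenderingContext2D,
  zoom: Zoom
): Promise<void> {
  const requestId = ++latestRequest;

  const bitmap = await image.getRegion(getVisibleRegion(), zoom);

  // If the user kept zooming while we waited, a newer request has already
  // superseded this one -- drop the stale pixels instead of painting them.
  if (requestId !== latestRequest) return;

  ctx.drawImage(bitmap, 0, 0, ctx.canvas.width, ctx.canvas.height);
}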

(Of course, the web browser will not be a piece of stupid, hateful, lame, broken, pathetic crap like all browsers seem to be, sigh. It will first do lame bitmap scaling so the user has immediate feedback, and then progressively refine and re-render as soon as it gets data from the back end. Of course, the back end will also not suck: it will know to interrupt itself if the request gets dropped before it is done because the user got a little impatient. Also, the back end will not suck because it will have cached mipmapped versions of the images, so it can take the closest one, do a quick and dirty lame bitmap scaling of it, and return that first so that the response is lickety-split. Etc. UX matters.)
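The cached mipmapping part is easy enough to sketch: keep pre-scaled copies at a handful of sizes, and when a request comes in, grab the closest cached size for the instant answer while the exact render grinds away. A minimal sketch, with made-up level sizes and URLs:

// Pick the cached mip level whose width is closest to the requested width.
// Levels are assumed to be pre-rendered at, say, power-of-two widths.

interface MipLevel {
  width: number;
  url: string; // where the pre-scaled copy lives
}

function nearestMipLevel(levels: MipLevel[], targetWidth: number): MipLevel {
  return levels.reduce((best, level) =>
    Math.abs(level.width - targetWidth) < Math.abs(best.width - targetWidth)
      ? level
      : best
  );
}

// Example: an 800px-wide request gets the 1024px copy scaled down right away,
// while the full-resolution render happens in the background.
const levels: MipLevel[] = [
  { width: 256, url: "/mips/256.jpg" },
  { width: 512, url: "/mips/512.jpg" },
  { width: 1024, url: "/mips/1024.jpg" },
];
nearestMipLevel(levels, 800); // -> the 1024px level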


I have to point out that this won't solve the problem entirely. Most of the time, people probably want the Google Image results to drive folks to their web sites, so they don't want Google's Image results to support this, because then nobody would go to the site.

But of course, the truth is that from a UX perspective the end user doesn't give a rat's ass in the immediate sense. They might care if all the web sites go dark and Google has no results to show in the end, of course, but that's not what my son is thinking when he's stabbing at my phone in sheer frustration trying to get the bloody image to zoom well. As soon as we see an image, that is the thing we want to interact with directly. We don't want to have to figure out "What happens if I click on this?" or "What do I have to do in order to get the full rez version of this?" etc. (By doing such a good job at search, Google has successfully screwed it all up for everybody in some sense. Oh well.)

2 comments:

  1. So responsive images (https://responsiveimages.org) may solve part of the problem you're trying to solve. That technology is mainly aimed at content providers on the web (e.g. NYtimes) who want to be able to show the right image resolution for the right device / screen resolution in the right context. However, it also paves the way for a more fluid UI around zooming, because using responsive images means that you have multiple resolutions of that image sitting on a server, and you use a tag to tell the browser the URL for each of those and what size each of them is. Make sense?

  2. Thanks Dan, sounds like something that could well be leveraged into an end-to-end feedback loop where the whole UI and thus UX will be interactive and wonderful. Some day... (A rough sketch of the srcset mechanism is below the comments.)

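For reference, the responsive images mechanism Dan mentions boils down to the srcset and sizes attributes on img: list the available copies and their widths, and the browser picks one for the current screen. A rough sketch of wiring it up from script, with placeholder file names:

// Responsive-images approach: declare every available resolution and let the
// browser choose. The URLs here are made up.

const img = document.createElement("img");
img.src = "bullet-cluster-640.jpg"; // fallback for browsers without srcset
img.srcset = [
  "bullet-cluster-640.jpg 640w",
  "bullet-cluster-1280.jpg 1280w",
  "bullet-cluster-2560.jpg 2560w",
].join(", ");
// Tell the browser how wide the image will be laid out, so it can pick the
// right candidate for the device's screen size and density.
img.sizes = "100vw";
document.body.appendChild(img);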