This page, still under construction, lists design guideline suggestions backed by logical reasoning that has often been confirmed in usability studies. Many GUI designers either ignore or simply don't know these basic rules, even though most of them should be common sense.
There are two basic approaches to usability within software user interfaces. Some designers cater to only one or the other, some cater to neither, and some just try to make the interface look OK without any regard for usability at all. Ideally, you need to provide an interface that is both "easy to learn" and "easy to use".
An "easy to learn" interface is one that is immediately obvious to the user. He or she should instantly know what the interface is capable of and what every possible interaction will achieve. In general, you can make a user interface "easy to learn" simply by labeling buttons with text that the user already knows how to read and that clearly explains what each button will do. It's easy for interface designers to assume that everyone will understand the UI as easily as its creator, but this is rarely the case. Also see "Always label buttons with text by default."
An "easy to use" interface is one designed for speed and efficiency. Primary tasks should take very few interactions and require minimal physical effort to carry out. Attributes of "easy to use" interfaces include placing buttons where they can be activated with the least amount of movement and effort. For example, with a command line interface, a user can keep both hands on the keyboard, quickly type a series of commands with very little effort and finger movement, then press "enter" and be done. The computer carries out the commands right away. This is "easy to use" in that it's very efficient, but it is not "easy to learn" if the user doesn't know the syntax of the commands required to carry out the intended functions.
This may sound like a complicated balance to strike, but really it's pretty easy. First of all, prioritize an "easy to learn" interface and consistency with the rest of the system. Software functions should always be as instantly obvious to new users as possible; otherwise new users won't use them. You can use consistency to make functions easier to understand. For example, all scrollbars within the system should look and function the same way. All menu buttons should look and function the same way. All keyboard shortcuts should be listed and visible in the same way. All application windows should look and function in the same way. This sounds obvious, but unfortunately many software designers ignore the advantages that consistency within the system brings.
The default "easy to learn" interface should cater to the most average use case scenarios as determined by usability studies or other data metrics.
Now, to add in the "easy to use" capabilities, we've got to make the UI customizable. Every user may have different priority tasks within your software. If my priority tasks are buried beneath three layers of menus, that's not going to be easy to use or efficient at all. You need to enable the surfacing and prioritizing of most-used functions for each user. This can be done with intelligent user interface designs, but it's probably best to allow the user to opt in to customizing the interface herself for her own specific use case scenarios... especially considering good UI designs are so rare these days, it's difficult to imagine good "intelligent UIs" evolving any time soon.
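As an illustration, the "intelligent" frequency-based approach could be sketched as a toolbar that ranks commands by observed use. Everything here is hypothetical: the command names, slot count, and ranking rule are assumptions for illustration, not any real toolkit's API, and an opt-in manual override would still belong on top of it.

```python
from collections import Counter

class AdaptiveToolbar:
    """Sketch: surface a user's most-used commands first.

    `slots` is how many buttons fit on screen; command names are
    hypothetical placeholders, not part of any real framework.
    """

    def __init__(self, default_order, slots=3):
        self.default_order = list(default_order)
        self.slots = slots
        self.usage = Counter()

    def record_use(self, command):
        self.usage[command] += 1

    def visible_buttons(self):
        # Rank by observed usage (most-used first); fall back to the
        # default order for ties so new users see the stock layout.
        rank = {c: i for i, c in enumerate(self.default_order)}
        ordered = sorted(self.default_order,
                         key=lambda c: (-self.usage[c], rank[c]))
        return ordered[:self.slots]

toolbar = AdaptiveToolbar(["reply", "delete", "archive", "forward", "move"])
for _ in range(5):
    toolbar.record_use("archive")
toolbar.record_use("forward")
print(toolbar.visible_buttons())  # ['archive', 'forward', 'reply']
```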
Users don't immediately understand ambiguous icon designs that are not already part of their learned language. Forcing users to learn something that could have been communicated with words that they already understand and learned in school is a barrier to entry and a waste of cognitive energy. This has been shown to be true in many usability studies over the decades yet software designers continue to ignore this.
Some say that icons are better than text since they don't pertain to any particular language and people can learn them, no matter what their primary language is, after repeated use. This is true if repeated use ever occurs, and that's a big "if". By littering a user interface with unintelligible icons, you make it so nobody except the designer understands the interface without fairly extensive training and trial-and-error. Furthermore, there is no dictionary or structured method for translating these made-up, non-standard icons into functions or words that users actually understand.
The reason we call them hamburger buttons is as an insult to the designer's inability to clearly communicate the function of the interactive element.
Graphical user interface design trends change periodically, as do the preferences of individual users. Users always prefer a consistent system-wide interface, as this increases cognitive ease when performing tasks. Your user interface will stay relevant longer if it follows the system theme, and it will be easier to learn if it's consistent with the rest of the system.
Laziness, budget constraints, and time constraints are common reasons to skip theme support. Some companies want their apps to have their own unique "branding" that separates them from the system as a whole, but this is generally not user-friendly since it causes distractions, creates inconsistencies, and increases cognitive load.
Many GUI designers target one specific usage scenario and optimize the interface for these "most common" tasks instead of offering the user a choice to make their particular usage scenarios more efficient. All user interface designs should have a consistent method of customizing controls throughout the system. This is especially important in mobile apps used on devices with smaller screens and thus less room for showing interactive controls. If I never use the "delete" button in an email program, I should be able to replace it with a more useful command that I do use frequently. Most professional-level programs on desktops already include customization options for users to tailor the software to their needs. This is rarely true of mobile apps, even though UI customization on mobile is probably much more useful.
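A minimal sketch of what opt-in control customization could look like as a data structure, assuming a hypothetical email app whose toolbar is just a list of command identifiers:

```python
# Sketch: opt-in toolbar customization. The default slots, command
# names, and `customize` helper are all hypothetical illustrations.
DEFAULT_SLOTS = ["reply", "delete", "archive"]

def customize(slots, replace, with_command, available):
    """Return a new slot layout with one command swapped out.

    Rejects commands the app doesn't actually offer, so the UI
    can't end up with dead buttons.
    """
    if with_command not in available:
        raise ValueError(f"unknown command: {with_command}")
    return [with_command if s == replace else s for s in slots]

AVAILABLE = {"reply", "delete", "archive", "move", "snooze"}

# A user who never deletes mail swaps "delete" for "snooze".
user_slots = customize(DEFAULT_SLOTS, "delete", "snooze", AVAILABLE)
print(user_slots)  # ['reply', 'snooze', 'archive']
```

The point of the sketch is that the customized layout is just data; it can be stored per user and restored to the default at any time.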
Adding customization options increases the complexity of the software and it's more work to implement. The majority of normal users don't learn to customize an application's user interface because they are already spending enough cognitive energy trying to figure out the default user interface.
In almost all cases, a splash screen that shows the program's logo for a couple of seconds is 100% useless to the user. We already know which program we're launching, because we activated its icon intentionally. The only thing the splash screen does (from a user's perspective) is slow down our ability to actually use the program we're trying to use.
Don't waste users' time for branding self indulgence.
Very often we see splash screens that show nothing but an application icon filling the screen or window. Sometimes we see splash screens that show copyright information and version numbers. That's slightly more useful, but if you can't make the application launch immediately when the user activates it, at least give the user useful information while they wait. If your software actually needs a few seconds to load the real user interface, show a progress bar so we can tell it's doing something.
Perhaps the useless splash screens mask loading functions that the software has to perform before it can display the actual user interface, and that's a valid reason, but you can make this much more informative by actually telling the user what is happening instead of being vague and uncommunicative about it.
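A sketch of the more communicative approach: report each named startup step to whatever widget draws the splash screen, instead of showing a static logo. The step names and the `report` callback are assumptions for illustration; in a real app, `report` would update a progress bar and status label.

```python
# Sketch: report named startup steps instead of a mute splash screen.
LOAD_STEPS = ["Reading preferences", "Loading plugins", "Opening last session"]

def load_with_progress(steps, report):
    """Run startup steps, telling the user what's happening and how far along we are."""
    total = len(steps)
    for i, name in enumerate(steps, start=1):
        report(f"{name}... ({i}/{total})")
        # ...the real loading work for this step would happen here...

messages = []
load_with_progress(LOAD_STEPS, messages.append)
print(messages[0])  # Reading preferences... (1/3)
```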
During the 1990s boom in desktop personal computing, almost all software programs implemented access keys and keyboard shortcuts in a consistent, easy-to-use manner. On Windows, the Alt key would activate keyboard access for GUI menus, and typing a corresponding letter on the keyboard would activate that menu command. You could easily see which letter you needed to type because it was underlined. This keyboard access interface made it extremely easy to trigger GUI commands without moving your hands away from the keyboard AND without having to memorize command line syntax. You could also easily build motor memory for activating specific functions again without reaching for a pointing device. Learning this one "underlined letter" concept for keyboard access to menus and commands meant you could use practically every program more efficiently.
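The underlying mechanism can be sketched as a simple dispatch table. In real toolkits the underlined letter is declared in the label itself (e.g. "&File" on Win32 or "_File" in GTK); the mapping below is only an illustration with hypothetical actions.

```python
# Sketch: a dispatch table in the spirit of Alt-key menu mnemonics.
# The letters and action strings are hypothetical examples.
MNEMONICS = {
    "f": "open File menu",
    "e": "open Edit menu",
    "v": "open View menu",
}

def on_alt_key(letter):
    """Return the menu action for an Alt+<letter> press, or None if unbound."""
    return MNEMONICS.get(letter.lower())

print(on_alt_key("F"))  # open File menu
```

Because the same letter always means the same menu across programs, the user's motor memory transfers from one application to the next.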
Today, that simple and consistent visual cue for keyboard access has largely been lost across programs, and that's a huge detriment to both the "easy to use" and "easy to learn" aspects of usability.
There are Android devices and Apple iPads that now support keyboards, but navigating them with a keyboard is very difficult, and that's bad design.
Many software development kits on different platforms probably don't have a built-in simple way of implementing keyboard access keys and shortcuts, therefore it's too much work to do it right.
For most software developers, good design is not a priority. That's why we're stuck with so many poorly designed programs. Making these programs both easier to learn and easier to use will reduce cognitive load when using them and increase the desire to use them.
Bad design in software doesn't really hurt anyone. It's not like bad environmental design where an unintelligible sign could confuse someone and point them down a path that can be physically dangerous... or bad weapons design where an ambiguous button could mean death.
So users can press the buttons!
Judging by the user interface designs of most apps and of the Google Android and Apple iOS operating systems as of 2019, most UI designers don't consider hand placement when designing interactive elements within apps. Most appear to assume that users are standing still, holding the phone in one hand and poking at it with the other. Often you'll find menu items, buttons, and even hidden notification/action drawers at the top of the screen AND at the bottom of the screen. No human hand can reach all of those places while holding a touchscreen phone larger than 4 inches, and just about every phone newer than 2015 has a much larger screen than that.
The two handed, "hold in one hand, poke with the other" interaction method is ok if you're sitting at a desk or on the couch, but mobile phones are meant to be mobile. The two handed interaction method falls apart very quickly if you're standing on a train where you need one hand to hold onto the handles, walking somewhere while carrying bags in one hand, driving a car, etc. UI designers often assume stationary usage instead of mobile usage even though we actually call them mobile phones.
If you're a UI designer looking at a large screen on your desktop or laptop computer and wondering where to place interactive elements in a smartphone UI, you can very easily figure this out: install a drawing app on your phone, hold the phone in one hand, and paint within the app using only your thumb, without scooting your hand around or shifting it to different positions.
For maximum efficiency, a user's thumb should be able to reach all system and app controls while holding the device, without repositioning the hand or using a second hand. This rule of thumb should be common sense for smartphone UI designers, but it clearly is not.
Placing buttons at the top of the screen makes them easier to see since that's where most people look first due to our top-to-bottom left-to-right reading habits.
This is true, but don't you think being able to actually use the buttons would be more important than simply seeing them?
Most apps simply stretch the width and shorten the height of their interface when a user rotates a smartphone into landscape orientation, but user interface designers forget that holding a phone in landscape involves a completely different interaction method: the fingers available for interaction are in a totally different position compared to where they would be in portrait mode.
Resizing the UI width and height is much easier. Again, this is an argument based on laziness.
This goes back to the "easy to learn" concept. Touch gestures are often not discoverable except by accident. Hence, they require training, and users often don't have the time or interest to go through training processes to use an app. If you add a left-edge swipe gesture that does something important, no user is going to instantly know that just by looking at the screen.
Gestures can be "easy to use" once the user does learn them. This is true and that's why it's good to implement them, but it's important to not rely on them since they are clearly not "easy to learn".
You can strike a better balance between "easy to learn" and "easy to use" by making gestures more discoverable: use buttons with text labels that explain the gestures, then allow the user to opt in to removing the text labels (or tooltips) once they have learned the gesture commands.
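One way to sketch that opt-in flow, with hypothetical names and an assumed "learned" threshold:

```python
# Sketch of the opt-in approach: keep a text label visible until the
# user has performed the gesture a few times, then offer to hide it.
# The class, label text, and threshold are assumptions for illustration.
class GestureHint:
    def __init__(self, label, learn_threshold=3):
        self.label = label          # e.g. "Swipe left to archive"
        self.uses = 0
        self.opted_out = False
        self.learn_threshold = learn_threshold

    def record_gesture(self):
        self.uses += 1

    def can_offer_opt_out(self):
        # Only offer to hide the label once the gesture looks learned.
        return self.uses >= self.learn_threshold

    def opt_out(self):
        self.opted_out = True

    def should_show_label(self):
        # The label never disappears on its own; the user chooses.
        return not self.opted_out

hint = GestureHint("Swipe left to archive")
print(hint.should_show_label())  # True
```

The key design choice in the sketch is that hiding the hint is always the user's decision; the app only uses the usage count to decide when offering that choice is reasonable.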
Consistency is very important with gestures as well. Some app developers may assign one function to a left-to-right screen swipe gesture, but this gesture's function may be completely different when the user is on the home screen, and it may be completely different depending on how close to the edge the touch gesture begins. These inconsistencies can be extremely confusing and require increased cognitive energy to memorize and carry out.
There was a time when speech-based user interfaces on mobile phones and computers were getting to be almost usable. Progress was essentially rebooted when iOS and Android became popular, and things took a step backwards.
Speech UIs are generally meant to be used when the user is unable or unwilling to interact with a computer or device through traditional tactile hand-eye coordination.
(MORE TO COME)