Design for Understanding

I worked with a group member to design three unique visualizations for a data set containing college graduation statistics. In designing our visualizations we assumed that we had the user’s attention and that they did not need training in using common charts. Our goal was to portray a subset of the overall data in a clear, unbiased and purely descriptive manner. We engaged in a series of ideation, design and user testing methods in order to develop and refine our user-centric visualizations. We approached every phase of the process with deliberate attention to detail and considered multiple strategic avenues for every design decision. Through this comprehensive process, we were able to gain valuable insights that contributed to constant improvements in our graphical representations of the data.

After selecting a data set with an abundance of information, we had to determine which particular statistics we wished to display. We immediately searched for intriguing relationships but were unable to find any compelling correlations. We therefore focused on women’s presence in their respective college majors and on median earnings by major. We concluded that visualizations illustrating the earnings associated with particular college majors would be the most interesting and would allow flexibility in presenting the data.

Figure 1

With these preliminary details finalized, we embarked on the brainstorming process. Initially, we sketched a multitude of ideas on individual sticky notes. After filtering the ideas and eliminating duplicate or weak candidates, I consolidated the strongest designs into a single design document. Figure 1 depicts the refined brainstorm sheet for the descriptive visualizations. The designs are organized by level of interactivity: items 1–8 do not incorporate interaction, while items 9 and 10 include user interaction as a primary function of the design.

We greatly benefitted from a collaborative brainstorming environment that encouraged creativity and a wide variety of ideas. The five design-sheet procedure was highly effective in facilitating the initial exploration process. The most critical component of this methodology was the constant evaluation and the actionable steps that followed rapid idea generation. By filtering and categorizing we were able to refine the direction of our design while still producing new ideas. The use of sticky notes during this phase allowed us to turn thoughts into physical collections. Because this step was tangible, it drove an action-oriented procedure and invited iteration. This significantly increased productivity and resulted in a more goal-oriented and organized workflow.

Figure 2

From this brainstorming activity, we determined the overall direction of our three designs by evaluating how certain layouts and styles served our goal of communicating a descriptive and informative graphic. We immediately decided to pursue a bar chart for its ability to accurately depict relationships among different elements. The primary purpose of this design was to illustrate variations in median salary among students graduating from different major programs. A bar chart is an exceptional tool for efficient comparison that does not exploit the user’s perception to manipulate their understanding. In Kennedy Elliott’s “39 studies about human perception in 30 minutes,” she conveys that “bars were more effective in communicating comparative values than either circles, squares or cubes.” Since we were interested in maintaining simplicity in our design, bars seemed best suited to our specific goal. Furthermore, this form of visualization prompts users to make judgments about quantity rather than relationships or trends. By placing the bars in close proximity, users will subconsciously begin estimating the value of particular elements by using closely aligned reference marks.

Figure 2 depicts our design sheet for a bar chart. We decided to attempt a slightly more nuanced approach due to the great number of different majors in our data set: we grouped the majors by category and then computed the mean earnings across all the majors in each category. We plotted major category on the x-axis and median earnings ($) on the y-axis. A click on any single bar would then invoke a scatter plot with information about just that particular major category. The scatter plot would depict the relationship between the number of people employed from a major and the total earnings, with each individual major represented by a square mark in the plane of the chart.
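
Our actual implementation was written as a Vega specification, but the intended structure of this chart can be sketched in Python with Altair, a wrapper that compiles to Vega-Lite. The column names below (major, major_category, median_earnings, employed) and the file name are assumptions standing in for whatever fields the real data set uses; this is only a rough sketch of the click-to-drill-down behavior, not our final spec.

```python
import altair as alt
import pandas as pd

# Assumed columns: major, major_category, median_earnings, employed
df = pd.read_csv("college_majors.csv")

# Clicking a bar selects a single major category.
pick_category = alt.selection_point(fields=["major_category"])

# Overview: mean of the median earnings for each major category.
bars = (
    alt.Chart(df)
    .mark_bar()
    .encode(
        x=alt.X("major_category:N", title="Major category"),
        y=alt.Y("mean(median_earnings):Q", title="Mean of median earnings ($)"),
    )
    .add_params(pick_category)
)

# Drill-down: square marks for the individual majors in the clicked category,
# plotting number employed against median earnings.
detail = (
    alt.Chart(df)
    .mark_square(size=80)
    .encode(
        x=alt.X("employed:Q", title="Number employed"),
        y=alt.Y("median_earnings:Q", title="Median earnings ($)"),
        tooltip=["major"],
    )
    .transform_filter(pick_category)
)

(bars & detail).save("bar_with_drilldown.html")  # overview stacked above the detail view
```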

Figure 3

Another design we derived from our brainstorm activity was a dispersed bubble plot not bound to any axes. This approach is not a traditional statistical representation, but it does maintain the minimalist style we aimed for. This design is illustrated in Figure 3, which features a predicted layout of the visualization. Each circle will be sized in relation to all of the other marks and in proportion to the earnings for that particular major. Majors with larger earnings will have a greater radius and those with lower earnings will have smaller radii. Each mark will include center-aligned text displaying the total median earnings for that major. This design will feature text only for the monetary value within each circle. We deliberately decided to omit any immediately visible information about the major name or other details in order to emphasize the median earnings statistic. It will therefore be clear that the size of the circle and the earnings amount displayed are directly proportional. The marks will also be color-coded, with each containing a unique color that corresponds to a key. It is essential that we select a color palette that is easily discernible to the eye and exhibits significant contrast. However, if tracing information between the visualization and the key is too time-consuming, the user can hover over any given mark and a pop-up will show other relevant information. Although circles are not the most effective shape for portraying a change in magnitude, we believe that size coupled with the text in each mark will create a visualization that seamlessly prompts comparison and highlights variation in earnings by major.
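
As a rough illustration of this layout, the sketch below (again Altair rather than our Vega implementation) scatters circles at arbitrary positions with the axes hidden, sizes them by earnings, overlays a centered text label, and adds a hover tooltip. The random-jitter placement and the column names are assumptions made purely for the sketch, not a description of the final layout algorithm.

```python
import altair as alt
import numpy as np
import pandas as pd

# Assumed columns: major, median_earnings
df = pd.read_csv("college_majors.csv")

# The bubbles float freely, so assign arbitrary positions and hide both axes.
rng = np.random.default_rng(0)
df["x"] = rng.uniform(0, 1, len(df))
df["y"] = rng.uniform(0, 1, len(df))

base = alt.Chart(df).encode(
    x=alt.X("x:Q", axis=None),
    y=alt.Y("y:Q", axis=None),
)

bubbles = base.mark_circle(opacity=0.8).encode(
    size=alt.Size("median_earnings:Q", legend=None),  # larger earnings, larger bubble
    color=alt.Color("major:N"),                       # unique color per major, shown in a key
    tooltip=["major", "median_earnings"],             # hover reveals the hidden details
)

# Center-aligned earnings label inside each bubble.
labels = base.mark_text(align="center", baseline="middle").encode(
    text="median_earnings:Q",
)

(bubbles + labels).save("bubble_plot.html")
```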

Figure 4

Figure 4 details the final design in our compilation of graphs communicating information about graduate earnings. This representation is a scatterplot with the number of people in a major plotted on the x-axis and earnings on the y-axis. This approach offers a unique perspective relative to our two other designs: the scatterplot is intended to depict a relationship between two factors, whereas our other designs are built strictly for comparative assessment. In creating this design we were extremely mindful of the size of the point cloud visible in the graphical plane. We were aware that greatly altering the scale of the graph, and in turn the magnification of the point cloud, could skew the user’s perception; humans are likely to associate higher dispersal of points with a lack of correlation and vice versa. We also decided to color-code the individual points based on the major and include a corresponding key. This design decision was influenced by Lewandowsky and Spence’s findings that humans can most accurately distinguish scatterplot symbols by differences in color (Elliott, 2016). It is also essential that we find an optimal size for each mark in the graph. The marks must be large enough to be naturally visible to the eye without covering too much surface area. If the points span too far in either the x or y direction, they could give the impression that a point denotes a range of numeric values rather than a single number.

A user testing the initial prototype

In conducting the first round of user tests we aimed to gain insight into the best style of marks to use on the scatterplot. We initially reasoned that color variation would be most successful in distinguishing the marks. However, we also wanted to test the effect of size variation without color difference. In designing our test, we created two identical scatterplots portraying the same subset of data while modifying only the style of points in the cluster. This testing layout allowed us to isolate the independent variable of point style and thus prevent other components from interfering with the user’s experience.

Each user was tasked with completing the same series of simple exercises. First, the user was presented with both graphs visible in a side-by-side orientation on a laptop screen. I conducted a total of four user tests. For the odd-numbered user tests (1 and 3) I prompted the user to look at the visualization on the left of the screen, which contained color-coded points. For the even-numbered user tests (2 and 4) the user was prompted to look first at the graph on the right side, which contained size-varying points. I deliberately removed the titles from the graphs in order to test the user’s ability to identify and summarize the information displayed without any influence, and I refrained from explaining any information about the material. I then instructed them to briefly glance over the graph, provide a description or summary of their first impressions, and characterize the goal of the visualization. Next, I tasked them with finding the data points with the highest, lowest and median percentage of women (depicted on the y-axis). Lastly, I instructed them to find the total number of women employed as a Business major.

Visualizations used for initial prototyping user tests

The figure above shows the graphs used during the prototyping user tests. In all of the user tests, regardless of which graph they observed first, the users were able to accurately identify that the objective of the visualization was to illustrate the employment of women across various professional fields. Thus, the choice of a scatterplot clearly achieved the overarching goal of providing a descriptive visual representation of a data set. However, all of the users tasked with finding the total number of women employed as a Business major failed. They communicated that they were unable to perceive the size variation of the squares: as they panned from the key to the cluster of points, they could not properly identify the point size that corresponded to the key. Furthermore, in several of the tests, users were not aware that hovering over a specific point displayed the name of the major. Due to the lack of emphasis on this interactive feature, users often guessed when searching for a data point with a specific major category. When completing the same task in the color-coded graph, users were able to locate the correct point among the entire cluster. However, this was not an efficient process, as they continuously glanced at the key and tried to make a color comparison. Users also complained that some of the colors in the key were too similar and could not be distinguished within the cluster, and one user voiced concern that the squares were too small and not easily viewable on the screen.

From a complete round of user testing, we received valuable information about our design and noted several key areas for improvement. This experiment confirmed our initial speculation that a change in color represents variation more accurately than a change in shape size. It was evident that users were unable to make accurate comparative judgments about magnitude when presented with similarly sized squares; this is an unnatural and difficult cognitive task. Users fared much better when tasked with finding specific points in the color-coded cluster. However, in order to improve this design, it was necessary to correct the color spectrum and add more contrast, which would prevent users from mistaking one data point for another with a similar color. We also realized the limitations of small points and determined that increasing the size of each mark would contribute to a more pleasing visual. Lastly, we noticed the inefficiency of relying on a key that maps colors to categories. We figured this repetitive and time-consuming process of tracking a color from the key to the point cloud could be replaced with interactivity in our final design.

This user testing methodology was highly effective in uncovering information about our design that we had initially overlooked. By creating an experimental model that isolated one particular design factor, we were able to obtain information that we could translate into actionable revisions. Without this approach, the designer is likely to receive broad and often ambiguous feedback that cannot be directly applied to improvements. Furthermore, we realized it is best practice to observe the user experience rather than guide it. By refraining from providing preliminary detail, the user can embark on a more genuine exploration, which prevents unconscious bias from impacting the results of a user test. We attempted to remain as removed as possible throughout the experience and allow the users to drive their own course of action. One aspect of our methodology I would change is the display format we initially presented to the user. Throughout the entire test, the user could view both graphs simultaneously, as they were positioned next to each other on the same screen. With this layout, the difference between the two charts was very apparent, so even without instruction from me, users may have been inclined to make immediate judgments. Since I was directing them to view only one design at a time, it would have been more useful to display only the current graph. Ultimately, a testing format that focused on facilitation rather than guidance and coercion produced meaningful feedback that would be reflected in our final designs.

A user testing our final moving average bar graph

In the final design phase of the project, we encountered numerous technical hardships that inhibited our ability to fully develop the functionality of each visualization. Several of the interactive components in Vega failed to perform correctly when implemented with our data set. Despite development shortcomings, we were still able to make crucial adjustments that enhanced user experience. We approached the final design user testing in the same manner as the prototype testing in order to accurately gauge performance improvements.

The final design for the descriptive scatter plot included color-coded points rather than points of varying sizes. We also increased the size of the marks, as several users had suggested. In addition, rather than displaying all of the majors in the point cluster, we limited the number of points in the field: to minimize confusion, we computed an average across all of the majors within each category. This eliminated the issue of repeated colors. With fewer data points in the graphical plane, the user was able to navigate from the key to the cluster much more efficiently. When tasked with finding information for a specific category, response times decreased greatly compared with the initial prototype tests. However, several users still noted a lack of contrast in some of the colors. Certain categories, such as Computers & Mathematics and Social Science, were displayed as similar tones of yellow, which caused confusion among users. Selecting another color scheme to avoid this confusion would greatly increase the clarity of this visualization.
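
The averaging step that reduced the point cloud can be expressed in a few lines. The sketch below uses pandas and Altair with illustrative column names rather than our actual Vega pipeline, so treat it only as an approximation of the final chart.

```python
import altair as alt
import pandas as pd

# Assumed columns: major_category, employed, median_earnings
df = pd.read_csv("college_majors.csv")

# One averaged point per major category keeps the cloud small and
# avoids repeating colors across many individual majors.
by_category = (
    df.groupby("major_category", as_index=False)[["employed", "median_earnings"]]
    .mean()
)

scatter = (
    alt.Chart(by_category)
    .mark_circle(size=200)  # enlarged marks, per the prototype feedback
    .encode(
        x=alt.X("employed:Q", title="Average number employed"),
        y=alt.Y("median_earnings:Q", title="Average median earnings ($)"),
        color=alt.Color("major_category:N", title="Major category"),
        tooltip=["major_category", "median_earnings"],
    )
)
scatter.save("final_scatter.html")
```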

Final moving average bar chart visualization

In finalizing the bar chart, we attempted to incorporate interactivity by layering a moving mean bar over the bar chart. This bar positions itself along the y-axis to represent the mean earnings for a selected region. However, technical issues during implementation resulted in the chart’s inability to accurately re-position the line after multiple different region selections. Instead, the horizontal line is fixed and represents the overall mean earnings for the entire data set. In the tests featuring this visualization, users were able to quickly identify the purpose of this horizontal bar. They also noted that it served as an added layer of comparison, which corresponds directly to the analytic tactic this graph is meant to stimulate. When performing basic tasks such as finding the highest- and lowest-earning category or stating the median earnings for any particular major category, users arrived at solutions very quickly. Moreover, all the users were confident in their evaluations. However, users did seem to prefer the color-coded nature of the scatterplot, and several testers suggested that the bars contain unique and independent colors to add aesthetic value. Ultimately, this chart performed the best in communicating the primary principles of the extracted data.
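
The fixed overall-mean line we ended up with can be sketched as a simple rule layer over the bars. As before, this is an Altair approximation of our Vega chart with assumed column names; the originally intended behavior would instead drive the rule from a selection over the bars so the mean re-computes for only the selected categories.

```python
import altair as alt
import pandas as pd

# Assumed columns: major_category, median_earnings
df = pd.read_csv("college_majors.csv")

bars = (
    alt.Chart(df)
    .mark_bar()
    .encode(
        x=alt.X("major_category:N", title="Major category"),
        y=alt.Y("mean(median_earnings):Q", title="Mean of median earnings ($)"),
    )
)

# Horizontal rule fixed at the overall mean earnings for the entire data set,
# the fallback behavior described above.
mean_line = (
    alt.Chart(df)
    .mark_rule(color="black", strokeDash=[4, 4])
    .encode(y="mean(median_earnings):Q")
)

(bars + mean_line).save("bar_with_mean_rule.html")
```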

Final bar chart visualization including all majors

In executing the design for our final descriptive visualization, we experienced significant difficulty implementing the floating bubble display illustrated in Figure 3. Because we were unable to deliver this visualization, we created another bar chart that featured all of the majors in the data set along the x-axis. This graph uses color to distinguish major categories. Due to the large number of different majors, we attempted to constrain the scale so that only a small portion of the data would be visible at a time. However, the pan and zoom interaction in Vega could not accommodate the dynamic variation in scale that we desired, so we displayed all of the majors in a single graph with a scrollable pane. Users of this graph reported difficulty reading the vertically oriented text on the x-axis as well as an inability to parse the bars due to their large quantity. Furthermore, while many users favored color in our other, simpler designs, users of this graph stated that the coloration was overwhelming and that its purpose was not immediately apparent. Users did quickly notice that the bars were organized in decreasing order from left to right, and they appreciated this feature as it greatly reduced search time. One user noted that they would have preferred the majors to be grouped by category, which would create more color unity rather than repeating colors throughout the entire horizontal spectrum. Ultimately, this design proved ineffective as it consolidated too much information into a relatively small field of view. The overwhelming nature of the design detracted from its message and deterred the user from thoroughly engaging.
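
For reference, a minimal Altair sketch of this all-majors chart, with assumed column names, sorts the bars by earnings and colors them by category; the fixed wide pane here stands in for the scrolling behavior we used, and is not our actual Vega spec.

```python
import altair as alt
import pandas as pd

# Assumed columns: major, major_category, median_earnings
df = pd.read_csv("college_majors.csv")

all_majors = (
    alt.Chart(df)
    .mark_bar()
    .encode(
        # Bars sorted by earnings, highest to lowest, left to right.
        x=alt.X("major:N", sort="-y", title="Major",
                axis=alt.Axis(labelAngle=-90)),
        y=alt.Y("median_earnings:Q", title="Median earnings ($)"),
        color=alt.Color("major_category:N", title="Major category"),
        tooltip=["major", "median_earnings"],
    )
    .properties(width=2000)  # wide pane, viewed through a horizontally scrollable container
)
all_majors.save("all_majors_bar.html")
```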

After significant reflection on our final design decisions, it is evident that a higher level of interactivity would have increased usability. We found that a high concentration of information in a confined area created a sense of disinterest in the user: rather than feeling invited, they immediately perceived the graphics as challenging and requiring significant effort to decipher. In visualizations like the bar chart with a moving average line, users appeared much more pleased when there were clear metrics for comparison. We also determined that color is useful in select applications. There is a threshold for the amount of color in a visualization; too much appears overwhelming and confusing to the viewer. Color must be used constructively to trigger cognitive behavior rather than merely as a tactic to increase beauty. Looking at the design process in its totality, there were numerous advantageous phases. A productive brainstorming and ideation procedure was the most beneficial aspect of the experience: robust planning contributed to a clear sense of direction and the awareness to iterate on our designs.

Final persuasive visualization

The members of our team tasked with developing the persuasive visualization encountered severe technical difficulties. Their resulting graph, shown in the figure above, depicts median earnings in relation to the unemployment rate among college graduates. They too decided to convey emphasis by combining size and color. Again, they received feedback that overlapping colors created an unclear visual. However, users of their visualization favored variation in shape size, unlike the feedback we received in our user tests. It would also be beneficial for them to further synthesize the data and compute averages for all of the majors within each category; this would greatly reduce clutter in the point cloud and generate significantly less overlap and redundancy. Ultimately, a simpler design in this instance would render a more effective outcome.
