
Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes

Jacob O. Wobbrock
The Information School, University of Washington
Mary Gates Hall, Box 352840, Seattle, WA 98195-2840
[email protected]

Andrew D. Wilson
Microsoft Research
One Microsoft Way, Redmond, WA 98052
[email protected]

Yang Li
Computer Science & Engineering, University of Washington
The Allen Center, Box 352350, Seattle, WA 98195-2350
[email protected]

ABSTRACT
Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.

ACM Categories & Subject Descriptors: H5.2 [Information interfaces and presentation]: User interfaces - Input devices and strategies; I5.2 [Pattern recognition]: Design methodology - Classifier design and evaluation; I5.5 [Pattern recognition]: Implementation - Interactive systems.

General Terms: Algorithms, Design, Experimentation, Human Factors.

Keywords: Gesture recognition, unistrokes, strokes, marks, symbols, recognition rates, statistical classifiers, Rubine, Dynamic Time Warping, user interfaces, rapid prototyping.

Figure 1. Unistroke gestures useful for making selections, executing commands, or entering symbols. This set of 16 was used in our study of $1, DTW [18,28], and Rubine [23].

INTRODUCTION
Pen, finger, and wand gestures are increasingly relevant to many new user interfaces for mobile, tablet, large display, and tabletop computers [2,5,7,10,16,31]. Even some desktop applications support mouse gestures. The Opera Web Browser, for example, uses mouse gestures to navigate and manage windows.¹ As new computing platforms and new user interface concepts are explored, the opportunity for using gestures made by pens, fingers, wands, or other path-making instruments is likely to grow, and with it, interest from user interface designers and rapid prototypers in using gestures in their projects.

However, along with the naturalness of gestures comes inherent ambiguity, making gesture recognition a topic of interest to experts in artificial intelligence (AI) and pattern matching.
¹ http://www.opera.com/products/desktop/mouse/

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
UIST'07, October 7-10, 2007, Newport, Rhode Island, USA.
Copyright 2007 ACM 978-1-59593-679-2/07/0010...$5.00.

To date, designing and implementing gesture recognition largely has been the privilege of experts in these fields, not experts in human-computer interaction (HCI), whose primary concerns are usually not algorithmic, but interactive. This has perhaps limited the extent to which novice programmers, human factors specialists, and user interface prototypers have considered gesture recognition a viable addition to their projects, especially if they are doing the algorithmic work themselves.

As an example, consider a sophomore computer science major with an interest in user interfaces. Although this student may be a capable programmer, it is unlikely that he has been immersed in Hidden Markov Models [1,3,25], neural networks [20], feature-based statistical classifiers [4,23], or dynamic programming [18,28] at this point in his career. In developing a user interface prototype, this student may wish to use Director, Flash, Visual Basic, JavaScript or a brand new tool rather than an industrial-strength environment suitable to production-level code. Without a gesture recognition library for these tools, the student's options for adding gestures are rather limited. He can dig into pattern matching journals, try to devise an ad-hoc algorithm of his own [4,19,31], ask for considerable help, or simply choose not to have gestures.

We are certainly not the first to note this issue in HCI. Prior work has attempted to provide gesture recognition for user interfaces through the use of libraries and toolkits [6,8,12,17]. However, libraries and toolkits cannot help where they do not exist, and many of today's rapid prototyping tools may not have such resources available.

On the flip side, ad-hoc recognizers also have their drawbacks. By "ad-hoc" we mean recognizers that use heuristics specifically tuned to a predefined set of gestures [4,19,31]. Implementing ad-hoc recognizers can be challenging if the number of gestures is very large, since gestures tend to "collide" in feature-space [14]. Ad-hoc recognition also prevents application end-users from defining their own gestures at runtime, since new heuristics would need to be added.

To facilitate the incorporation of gestures into user interface prototypes, we present a $1 recognizer that is easy, cheap, and usable almost anywhere. The recognizer is very simple, involving only basic geometry and trigonometry. It requires about 100 lines of code for both gesture definition and recognition. It supports configurable rotation, scale, and position invariance, does not require feature selection or training examples, is resilient to variations in input sampling, and supports high recognition rates, even after only one representative example. Although $1 has limitations as a result of its simplicity, it offers excellent recognition rates for the types of symbols and strokes that can be useful in user interfaces.

In order to evaluate $1, we conducted a controlled study of it and two other recognizers on the 16 gesture types shown in Figure 1. Our study used 4800 pen gestures supplied by 10 subjects on a Pocket PC. Some of the questions we address in this paper are: How well does $1 perform on user interface gestures compared to two more complex algorithms used in HCI? How does recognition improve as the number of templates or training examples increases? How do gesture articulation speeds affect recognition? How do recognizers' scores degrade as we move down their N-best lists? Which gestures do users prefer? Along with answering these questions, the contributions of this paper are:

1. To present an easy-to-implement gesture recognition algorithm for use by UI prototypers who may have little or no knowledge of pattern recognition. This includes an efficient scheme for rotation invariance;
2. To empirically compare $1 to more advanced, theoretically sophisticated algorithms, and to show that $1 is successful in recognizing certain types of user interface gestures, like those shown in Figure 1;
3. To give insight into which user interface gestures are "best" in terms of human and recognizer performance, and human subjective preference.

We are interested in recognizing paths delineated by users interactively, so we restrict our focus to unistroke gestures that unfold over time. The gestures we used for testing (Figure 1) are based on those found in other interactive systems [8,12,13,27]. It is our hope that user interface designers and prototypers wanting to add gestures to their projects will find the $1 recognizer easy to understand, build, inspect, debug, and extend, especially in design-oriented environments where gestures are typically scarce.

RELATED WORK
Various approaches to gesture recognition were mentioned in the introduction, including Hidden Markov Models (HMMs) [1,3,25], neural networks [20], feature-based statistical classifiers [4,23], dynamic programming [18,28], and ad-hoc heuristic recognizers [4,19,31]. All have been used extensively in domains ranging from on-line handwriting recognition to off-line diagram recognition. Space precludes a full treatment. For in-depth reviews, readers are directed to prior surveys [21,29].

For recognizing simple user interface strokes like those shown in Figure 1, many of these sophisticated methods are left wanting. Some must be trained with numerous examples, like HMMs, neural networks, and statistical classifiers, making them less practical for UI prototypes in which application end-users define their own strokes. These algorithms are also difficult to program and debug. Even Rubine's popular classifier [23] requires programmers to compute matrix inversions, discriminant values, and Mahalanobis distances, which can be obstacles. Dynamic programming methods are computationally expensive and sometimes too flexible in matching [32], and although improvements in speed are possible [24], these improvements put the algorithms well beyond the reach of most UI designers and prototypers. Finally, ad-hoc methods scale poorly and usually do not permit adaptation or definition of new gestures by application end-users.

Previous efforts at making gesture recognition more accessible have been through the inclusion of gesture recognizers in user interface toolkits. Artkit [6] and Amulet [17] support the incorporation of gesture recognizers in user interfaces. Amulet's predecessor, Garnet, was extended with Agate [12], which used the Rubine classifier [23]. More recently, SATIN [8] combined gesture recognition with other ink-handling support for developing informal pen-based UIs. Although these toolkits are powerful, they cannot help in most new prototyping environments because they are not available.

Besides research toolkits, some programming libraries offer APIs for supporting gesture recognition on specific platforms. An example is the Siger library for Microsoft's Tablet PC [27], which allows developers to define gestures for their applications. The Siger recognizer works by turning strokes into directional tokens and matching these tokens using regular expressions and heuristics. As with toolkits, libraries like Siger are powerful; but they are not useful where they do not exist. The $1 recognizer, by contrast, is simple enough to be implemented wherever necessary, even in many rapid prototyping environments.

THE $1 GESTURE RECOGNIZER
In this section, we describe the $1 gesture recognizer. A pseudocode listing of the algorithm is given in Appendix A.

Characterizing the Challenge
A user's gesture results in a set of candidate points C, and we must determine which set of previously recorded template points T_i it most closely matches. Candidate and template points are usually obtained through interactive means by some path-making instrument moving through a position-sensing region. Thus, candidate points are sampled at a rate determined by the sensing hardware and software. This fact and human variability mean that points in similar C and T_i will rarely "line up" so as to be easily comparable. Consider the two pairs of gestures made by the same subject in Figure 2.

Figure 2. Two pairs of fast (~600 ms) gestures made by a subject with a stylus. The number of points in corresponding sections are labeled. Clearly, a 1:1 comparison of points is insufficient.

In examining these pairs of "pigtail" and "x", we see that they are different sizes and contain different numbers of points. This distinction presents a challenge to recognizers. Also, the pigtails can be made similar to the "x" gestures using a 90° clockwise turn. Reflecting on these issues and on our desire for simplicity, we formulated the following criteria for our $1 recognizer. The $1 recognizer must:

1. be resilient to variations in sampling due to movement speed or sensing;
2. support optional and configurable rotation, scale, and position invariance;
3. require no advanced mathematical techniques (e.g., matrix inversions, derivatives, integrals);
4. be easily written in few lines of code;
5. be fast enough for interactive purposes (no lag);
6. allow developers and application end-users to "teach" it new gestures with only one example;
7. return an N-best list with sensible [0..1] scores that are independent of the number of input points;
8. provide recognition rates that are competitive with more complex algorithms previously used in HCI to recognize the types of gestures shown in Figure 1.

With these goals in mind, we describe the $1 recognizer in the next section. The recognizer uses four steps, which correspond to those offered as pseudocode in Appendix A.

A Simple Four-Step Algorithm
Raw input points, whether those of gestures meant to serve as templates, or those of candidate gestures attempting to be recognized, are initially treated the same: they are resampled, rotated once, scaled, and translated. Candidate points C are then scored against each set of template points T_i over a series of angular adjustments to C that finds its optimal angular alignment to T_i. Each of these steps is explained in more detail below.

Step 1: Resample the Point Path
As noted in the previous section, gestures in user interfaces are sampled at a rate determined by the sensing hardware and input software. Thus, movement speed will have a clear effect on the number of input points in a gesture (Figure 3).

Figure 3. A slow and fast question mark and triangle made by subjects using a stylus on a Pocket PC. Note the considerable time differences and resulting numbers of points.

To make gesture paths directly comparable even at different movement speeds, we first resample gestures such that the path defined by their original M points is defined by N equidistantly spaced points (Figure 4). Using an N that is too low results in a loss of precision, while using an N that is too high adds time to path comparisons. In practice, we found N=64 to be adequate, as was any 32 ≤ N ≤ 256.

Although resampling is not particularly common compared to other techniques (e.g., filtering), we are not the first to use it. Some prior handwriting recognition systems have also resampled stroke paths [21,29]. Also, the SHARK² system resampled its strokes [11]. However, SHARK² is not fully rotation, scale, and position invariant, since gestures are defined atop the soft keys of an underlying stylus keyboard, making complete rotation, scale, and position invariance undesirable. Interestingly, the original SHARK system [32] utilized Tappert's elastic matching technique [28], but SHARK² discontinued its use to improve accuracy. However, in mentioning this choice, the SHARK² paper [11] provided no specifics as to the comparative performance of these techniques. We now take this step, offering an evaluation of an elastic matching technique (DTW) and our simpler resampling technique ($1), extending both with efficient rotation invariance.

Figure 4. A star gesture resampled to N=32, 64, and 128 points.

To resample, we first calculate the total length of the M-point path. Dividing this length by (N–1) gives the length of each increment, I, between N new points. Then the path is stepped through such that when the distance covered exceeds I, a new point is added through linear interpolation. The RESAMPLE function in Appendix A gives a listing. At the end of this step, the candidate gesture and any loaded templates will all have exactly N points. This will allow us to measure the distance from C[k] to T_i[k] for k=1 to N.
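For illustration, Step 1 translates into only a few lines of JavaScript. The sketch below is one possible rendering of the RESAMPLE and PATH-LENGTH pseudocode in Appendix A, assuming strokes are arrays of {x, y} point objects; the helper names and the final padding line are illustrative choices, not part of the pseudocode.

// Resample a stroke into n equidistantly spaced points (Step 1).
function distance(p1, p2) {
  return Math.hypot(p2.x - p1.x, p2.y - p1.y);
}

function pathLength(points) {
  let d = 0;
  for (let i = 1; i < points.length; i++)
    d += distance(points[i - 1], points[i]);
  return d;
}

function resample(points, n) {
  const pts = points.slice();           // work on a copy
  const I = pathLength(pts) / (n - 1);  // desired interval between points
  let D = 0;                            // distance accumulated so far
  const newPoints = [pts[0]];
  for (let i = 1; i < pts.length; i++) {
    const d = distance(pts[i - 1], pts[i]);
    if (D + d >= I) {
      // Interpolate a new point where the accumulated distance crosses I.
      const t = (I - D) / d;
      const q = {
        x: pts[i - 1].x + t * (pts[i].x - pts[i - 1].x),
        y: pts[i - 1].y + t * (pts[i].y - pts[i - 1].y)
      };
      newPoints.push(q);
      pts.splice(i, 0, q);  // q becomes the next pts[i]
      D = 0;
    } else {
      D += d;
    }
  }
  // Floating-point rounding can leave the result one point short of n.
  while (newPoints.length < n) newPoints.push(pts[pts.length - 1]);
  return newPoints;
}

With N=64, a stroke of any raw length becomes directly comparable point-for-point to any stored template.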

Step 2: Rotate Once Based on the "Indicative Angle"
With two paths of ordered points, there is no closed-form solution for determining the angle to which one set of points should be rotated to best align with the other [9]. Although there are complex techniques based on moments, these are not made to handle ordered points [26]. Our $1 algorithm therefore searches over the space of possible angles for the best alignment between two point-paths. Although for many complex recognition algorithms an iterative process is prohibitively expensive [9], $1 is fast enough to make iteration useful. In fact, even naïvely rotating the candidate gesture by +1° for 360° is fast enough for interactive purposes with 30 templates. However, we can do better than brute force with a "rotation trick" that makes finding the optimal angle much faster.

First, we find a gesture's indicative angle, which we define as the angle formed between the centroid of the gesture (x̄, ȳ) and the gesture's first point. Then we rotate the gesture so that this angle is at 0° (Figure 5). The ROTATE-TO-ZERO function in Appendix A gives a listing. An analysis of $1's rotation invariance scheme is discussed in the next section.

Figure 5. Rotating a triangle so that its "indicative angle" is at 0° (straight right). This approximates finding the best angular match.

Step 3: Scale and Translate
After rotation, the gesture is scaled to a reference square. By scaling to a square, we are scaling non-uniformly. This will allow us to rotate the candidate about its centroid and safely assume that changes in pairwise point-distances between C and T_i are due only to rotation, not to aspect ratio. Of course, non-uniform scaling introduces some limitations, which will be discussed below. The SCALE-TO-SQUARE function in Appendix A gives a listing.

After scaling, the gesture is translated to a reference point. For simplicity, we choose to translate the gesture so that its centroid (x̄, ȳ) is at (0,0). The TRANSLATE-TO-ORIGIN function gives a listing in Appendix A.
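Steps 2 and 3 admit an equally direct rendering, continuing the JavaScript sketch above. The centroid and bounding-box computations are inlined here; as before, this mirrors the Appendix A pseudocode rather than prescribing an implementation.

function centroid(points) {
  let x = 0, y = 0;
  for (const p of points) { x += p.x; y += p.y; }
  return { x: x / points.length, y: y / points.length };
}

function rotateBy(points, theta) {  // rotate about the centroid
  const c = centroid(points);
  const cos = Math.cos(theta), sin = Math.sin(theta);
  return points.map(p => ({
    x: (p.x - c.x) * cos - (p.y - c.y) * sin + c.x,
    y: (p.x - c.x) * sin + (p.y - c.y) * cos + c.y
  }));
}

function rotateToZero(points) {  // Step 2: put the indicative angle at 0°
  const c = centroid(points);
  const theta = Math.atan2(c.y - points[0].y, c.x - points[0].x);
  return rotateBy(points, -theta);
}

function scaleToSquare(points, size) {  // Step 3: non-uniform scale
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const p of points) {
    minX = Math.min(minX, p.x); maxX = Math.max(maxX, p.x);
    minY = Math.min(minY, p.y); maxY = Math.max(maxY, p.y);
  }
  // NOTE: a nearly 1-D stroke makes one denominator approach zero; see
  // "Limitations of the $1 Recognizer" below for the suggested handling.
  return points.map(p => ({
    x: p.x * (size / (maxX - minX)),
    y: p.y * (size / (maxY - minY))
  }));
}

function translateToOrigin(points) {  // Step 3: centroid to (0,0)
  const c = centroid(points);
  return points.map(p => ({ x: p.x - c.x, y: p.y - c.y }));
}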
Step 4: Find the Optimal Angle for the Best Score
At this point, all candidates C and templates T_i have been treated the same: resampled, rotated once, scaled, and translated. In our implementations, we apply the above steps when templates' points are read in. For candidates, we apply these steps after they are articulated. Then we take Step 4, which actually does the recognition. RECOGNIZE and its associated functions give a listing in Appendix A.

Using Equation 1, a candidate C is compared to each stored template T_i to find the average distance d_i between corresponding points:

  d_i = (1/N) Σ_{k=1..N} √( (C[k].x – T_i[k].x)² + (C[k].y – T_i[k].y)² )    (1)

Equation 1 defines d_i, the path-distance between C and T_i. The template T_i with the least path-distance to C is the result of the recognition. This minimum path-distance d_i* is converted to a [0..1] score using:

  score = 1 – d_i* / (½ √(size² + size²))    (2)

In Equation 2, size is the length of a side of the reference square to which all gestures were scaled in Step 3. Thus, the denominator is half of the length of the bounding box diagonal, which serves as a limit to the path-distance.

When comparing C to each T_i, the result of each comparison must be made using the best angular alignment of C and T_i. In Step 2, rotating C and T_i once using their indicative angles only approximated their best angular alignment. However, C may need to be rotated further to find the least path-distance to T_i. Thus, the "angular space" must be searched for a global minimum, as described next.
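In code, Equations 1 and 2 reduce to a few lines; dStar is the minimum path-distance over all templates, and size is the reference square's side length (250 below is purely an illustrative value, not one prescribed by the algorithm).

function pathDistance(A, B) {  // Equation 1: mean distance between pairs
  let d = 0;
  for (let i = 0; i < A.length; i++) d += distance(A[i], B[i]);
  return d / A.length;
}

function scoreFromDistance(dStar, size) {  // Equation 2
  return 1 - dStar / (0.5 * Math.sqrt(size * size + size * size));
}

// For example, with size = 250, half the diagonal is ≈ 176.8, so a best
// path-distance of 25 yields a score of about 0.86.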

An Analysis of Rotation Invariance
As stated, there is no closed-form means of rotating C into T_i such that their path-distance is minimized. For simplicity, we take a "seed and search" approach that minimizes iterations while finding the best angle. This is simpler than the approach used by Kara and Stahovich [9], which used polar coordinates and had to employ weighting factors based on points' distances from the centroid.

After rotating the indicative angles of all gestures to 0° (Figure 5), there is no guarantee that two gestures C and T_i will be aligned optimally. We therefore must fine-tune C's angle so that C's path-distance to T_i is minimized. As mentioned, a brute force scheme could rotate C by +1° for all 360° and take the best result. Although this method is guaranteed to find the optimal angle to within 0.5°, it is unnecessarily slow and could be a problem in processor-intensive applications (e.g., games).

We manually examined a stratified sample of 480 similar² gesture-pairs from our subjects, finding that there was always a global minimum and no local minima in the graphs of path-distance as a function of angle (Figure 6a). Therefore, a first improvement over the brute force approach would be hill climbing: rotate C by ±1° for as long as C's path-distance to T_i decreases. For our sample of 480 pairs, we found that hill climbing always found the global minimum, requiring 7.2 (SD=5.0) rotations on average. The optimal angle was, on average, just 4.2° (5.0°) away from the indicative angle at 0°, indicating that the indicative angle was indeed a good approximation of angular alignment for similar gestures. (That said, there were a few matches found up to ±44° away.) The path-distance after just rotating the indicative angle to 0° was only 10.9% (13.0) higher than optimal.

However, although hill climbing is efficient for similar gestures, it is not efficient for dissimilar ones. In a second stratified sample of 480 dissimilar gesture-pairs, we found that the optimal angle was an average of 63.6° (SD=50.8°) away from the indicative angle at 0°. This required an average of 53.5 (45.7) rotations using hill climbing. The average path-distance after just rotating the indicative angle to 0° was 15.8% (14.7) higher than optimal. Moreover, of the 480 dissimilar pairs, 52 of them, or 10.8%, had local minima in their path-distance graphs (Figure 6b), which means that hill climbing might not succeed. However, local minima alone are not concerning, since suboptimal scores for dissimilar gestures only decrease our chances of getting unwanted matches. The issue of greater concern is the high number of iterations, especially with many templates.

Figure 6. Path-distance as a function of angular rotation away from the 0° indicative angle (centered y-axis) for (a) similar gestures and (b) dissimilar gestures.

Since there will be many more comparisons of a candidate C to dissimilar templates than to similar ones, we chose to use a strategy that performs slightly worse than hill climbing for similar gestures but far better for dissimilar ones. An efficient strategy is Golden Section Search (GSS) [22], an algorithm that finds the minimum value in a range using the Golden Ratio φ = 0.5(-1 + √5). In our sample of 480 similar gestures, no match was found beyond ±45° from the indicative angle, so we use GSS bounded by ±45° and a 2° threshold. This guarantees that GSS will finish after exactly 10 iterations, regardless of whether or not two gestures are similar. For our 480 similar gesture-pairs, the distance returned by GSS was, on average, within 0.2% (0.4) of the optimal, while the angle returned was within 0.5°. Furthermore, although GSS loses |10.0–7.2|=2.8 iterations to hill climbing for similar gestures, it gains |10.0–53.5|=43.5 iterations for dissimilar ones. Thus, in a recognizer with 10 templates for each of 16 gesture types (160 templates), GSS would require 160×10=1600 iterations to recognize a candidate, compared to 7.2×10 + 53.5×150=8097 iterations for hill climbing, an 80.2% savings. (Incidentally, brute force would require 160×360=57,600 iterations.) The DISTANCE-AT-BEST-ANGLE function in Appendix A implements GSS.

² By "similar," we mean gestures subjects intended to be the same.
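Rendered in JavaScript, Golden Section Search over the rotation angle looks roughly as follows, mirroring DISTANCE-AT-BEST-ANGLE in Appendix A with the ±45° bounds and 2° threshold chosen above (the helper names reuse the earlier sketches):

const PHI = 0.5 * (-1 + Math.sqrt(5));   // Golden Ratio ≈ 0.618
const deg2rad = (d) => d * Math.PI / 180;

function distanceAtAngle(points, template, theta) {
  return pathDistance(rotateBy(points, theta), template);
}

function distanceAtBestAngle(points, template,
                             a = deg2rad(-45),       // search bounds
                             b = deg2rad(45),
                             threshold = deg2rad(2)) {
  let x1 = PHI * a + (1 - PHI) * b;
  let f1 = distanceAtAngle(points, template, x1);
  let x2 = (1 - PHI) * a + PHI * b;
  let f2 = distanceAtAngle(points, template, x2);
  while (Math.abs(b - a) > threshold) {
    if (f1 < f2) {   // minimum lies in [a, x2]; reuse x1 as the new x2
      b = x2; x2 = x1; f2 = f1;
      x1 = PHI * a + (1 - PHI) * b;
      f1 = distanceAtAngle(points, template, x1);
    } else {         // minimum lies in [x1, b]; reuse x2 as the new x1
      a = x1; x1 = x2; f1 = f2;
      x2 = (1 - PHI) * a + PHI * b;
      f2 = distanceAtAngle(points, template, x2);
    }
  }
  return Math.min(f1, f2);
}

Because each pass shrinks the bracketed interval by the constant factor φ, the iteration count depends only on the bounds and the threshold, never on the gesture pair, which is what makes the cost predictable even for dissimilar templates.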

Limitations of the $1 Recognizer
Simple techniques have limitations, and the $1 recognizer is no exception. The $1 recognizer is a geometric template matcher, which means that candidate strokes are compared to previously stored templates, and the result produced is the closest match in 2-D Euclidean space. To facilitate pairwise point comparisons, the default $1 algorithm is rotation, scale, and position invariant. While this provides tolerance to gesture variation, it means that $1 cannot distinguish gestures whose identities depend on specific orientations, aspect ratios, or locations. For example, separating squares from rectangles, circles from ovals, or up-arrows from down-arrows is not possible without modifying the algorithm. Furthermore, horizontal and vertical lines are abused by non-uniform scaling; if 1-D gestures are to be recognized, candidates can be tested to see if the minor dimension of their bounding box exceeds a minimum. If it does not, the candidate (e.g., line) can be scaled uniformly so that its major dimension matches the reference square. Finally, $1 does not use time, so gestures cannot be differentiated on the basis of speed. Prototypers wishing to differentiate gestures on these bases will need to understand and modify the $1 algorithm. For example, if scale invariance is not desired, the candidate C can be resized to match each unscaled template T_i before comparison. Or if rotation invariance is unwanted, C and T_i can be compared without rotating the indicative angle to 0°. Importantly, such treatments can be made on a per gesture (T_i) basis.

Accommodating gesture variability is a key property of any recognizer. Feature-based recognizers, like Rubine [23], can capture properties of a gesture that matter for recognition if the features are properly chosen. Knowledgeable users can add or remove features to distinguish troublesome gestures, but because of the difficulty in choosing good features, it is usually necessary to define a gesture class by its summary statistics over a set of examples. In Rubine's case, this has the undesirable consequence that there is no guarantee that even the training examples themselves will be correctly recognized if they are entered as candidates. Such unpredictable behavior may be a serious limitation for $1's audience.

In contrast, to handle variation in $1, prototypers or application end-users can define new templates that capture the variation they desire by using a single name. For example, different arrows can all be recognized as "arrow" with just a few templates bearing that name (Figure 7). This aliasing is a direct means of handling variation among gestures in a way that users can understand. If a user finds that a new arrow he makes is not recognized, he can simply add that arrow as a new template of type "arrow" and it will be recognized from then on. Of course, the success of this approach depends on what other templates are loaded. A brief sketch of this aliasing idea appears below.

Figure 7. Defining multiple instances of "arrow" allows variability in the way candidate arrows can be made and matched. Note that orientation is not an issue, since $1 is rotation invariant.
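To make template aliasing concrete, the following sketch builds a small template set and returns an N-best list. The processStroke, makeTemplate, and recognize names are illustrative (reusing the helpers sketched earlier), not a prescribed API, and the constants are example choices.

const N = 64;      // resampling resolution (Step 1)
const SIZE = 250;  // reference square side; an illustrative choice

function processStroke(rawPoints) {  // Steps 1-3, applied to any stroke
  return translateToOrigin(
    scaleToSquare(rotateToZero(resample(rawPoints, N)), SIZE));
}

function makeTemplate(name, rawPoints) {
  return { name: name, points: processStroke(rawPoints) };
}

function recognize(rawPoints, templates) {
  const candidate = processStroke(rawPoints);
  const nBest = templates.map(T => ({
    name: T.name,
    score: scoreFromDistance(distanceAtBestAngle(candidate, T.points), SIZE)
  }));
  nBest.sort((a, b) => b.score - a.score);
  return nBest;  // head of the list is the recognition result
}

// Several templates may share one name ("aliasing"); a candidate matching
// any of them is recognized under that name, e.g. (with captured strokes):
//   const templates = [makeTemplate("arrow", arrowVariant1),
//                      makeTemplate("arrow", arrowVariant2),
//                      makeTemplate("pigtail", pigtailSample)];
//   recognize(candidatePoints, templates)[0].name;  // e.g., "arrow"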
EVALUATION
To compare the performance of our $1 recognizer to more complex recognizers used in HCI, we conducted an evaluation using 4800 gestures collected from 10 subjects.

Method

Subjects
Ten subjects were recruited. Five were students. Eight were female. Three had technical degrees in science, engineering, or computing. The average age was 26.1 (SD=6.4).

Apparatus
Using an HP iPAQ h4355 Pocket PC with a 2.25"×3.00" screen, we presented the gestures shown in Figure 1 in random order to subjects. The gestures were based on those used in other user interface systems [8,12,13,27]. Subjects used a pen-sized plastic stylus measuring 6.00" in length to enter gestures on the device. Our Pocket PC application (Figure 8) logged all gestures in a simple XML format, containing (x, y) points with millisecond timestamps.

Figure 8. The Pocket PC application used to capture gestures made by subjects. The right image shows the reminder displayed when subjects began the fast speed for the "delete_mark" gesture.

Procedure: Capturing Gestures
For each of the 16 gesture types from Figure 1, subjects entered one practice gesture before beginning three sets of 10 entries at slow, medium, and fast speeds. Messages were presented between each block of slow, medium, and fast gestures to remind subjects of the speed they should use. For slow gestures, they were asked to "be as accurate as possible." For medium gestures, they were asked to "balance speed and accuracy." For fast gestures, they were asked to "go as fast as they can." After entering 16×3×10=480 gestures, subjects were given a chance to rate them.

Procedure: Recognizer Testing
We compared our $1 recognizer to two popular recognizers previously used in HCI. The Rubine classifier [23] has been used widely (e.g., [8,13,14,17]). It relies on training examples from which it extracts and weights features to perform statistical matching. Our version includes the gdt [8,14] routines for improving Rubine on small training sets.

We also tested a template matcher based on Dynamic Time Warping (DTW) [18,28]. Like $1, DTW does not extract features from training examples but matches point-paths. Unlike $1, however, DTW relies on dynamic programming, which gives it considerable flexibility in how two point sequences may be aligned.

We extended Rubine and DTW to use $1's rotation invariance scheme. Also, the gestures for Rubine and DTW were scaled to a standard square size and translated to the origin. They were not resampled, since these techniques do not use pairwise point comparisons. Rubine was properly trained after these adjustments to gestures were made.

Figure 9. (a) Recognition error rates as a function of templates or training (lower is better). (b) Recognition error rates as a function of articulation speeds (lower is better). (c) Normalized gesture scores [0..1] for each position along the N-best list at 9 training examples.

The testing procedure we followed was based on those used for testing in machine learning [15] (pp. 145-150). Of a given subject's 16×10=160 gestures made at a given speed, the number of training examples E for each of the 16 gesture types was increased systematically from E=1 to 9 for $1 and DTW, and E=2 to 9 for Rubine (Rubine fails on E=1). In a process repeated 100 times per level of E, E training examples were chosen randomly for each gesture category. Of the remaining 10–E untrained gestures in each category, one was picked at random and tested as the candidate. Over the 100 tests, incorrect outcomes were averaged into a recognition error rate for each gesture type for that subject at that speed.

For a given subject at a given speed, there were 9×16×100=14,400 recognition tests for $1 and DTW, and 8×16×100=12,800 tests for Rubine. These 41,600 tests were done at 3 speeds, for 124,800 total tests per subject. Thus, with 10 subjects, the experiment consisted of 1,248,000 recognition tests. The results of every test were logged, including the entire N-best lists.

Design and Analysis
The experiment was a 3-factor within-subjects repeated measures design, with nominal factors for recognizer and articulation speed, and a continuous factor for number of training examples. The outcome measure was mean recognition errors. Since errors were rare, the data were skewed toward zero and violated ANOVA's normality assumption, even under usual transformations. However, Poisson regression [30] was well-suited to these data and was therefore used. The overall model was significant (χ²(22,N=780)=3300.21, p<.0001).

Results

Recognition Performance
$1 and DTW were very accurate overall, with 0.98% (SD=3.63) and 0.85% (3.27) recognition errors, respectively. (Equivalently, recognition rates were 99.02% and 99.15%, respectively.) Rubine was less successful, with 7.17% (10.60) errors. These differences were statistically significant (χ²(2,N=780)=867.33, p<.0001). $1 and DTW were significantly more accurate than Rubine (χ²(1,N=780)=668.43, p<.0001), but $1 and DTW were not significantly different from one another (χ²(1,N=780)=0.13, n.s.).

Effect of Number of Templates / Training Examples
The number of templates or training examples had a significant effect on recognition errors (χ²(1,N=780)=125.24, p<.0001). As shown in Figure 9a, $1 and DTW improved slightly as the number of templates increased, from 2.73% (SD=2.38) and 2.14% (1.76) errors with 1 template to 0.45% (0.64) and 0.54% (0.84) errors with 9 templates, respectively. Rubine's improvement was more pronounced, from 16.03% (5.98) errors with 2 training examples to 4.70% (3.03) errors with 9 training examples. However, this difference only produced a marginal recognizer × training interaction (χ²(2,N=780)=4.80, p=0.09).

Effect of Gesture Articulation Speed
Subjects' average speeds for slow, medium, and fast gestures were 1761 (SD=567), 1153 (356), and 668 (212) milliseconds. Speed had a significant effect on errors (χ²(2,N=780)=24.56, p<.0001), with slow, medium, and fast gestures being recognized with 2.84% (4.07), 2.46% (4.09), and 3.22% (4.44) errors, respectively (Figure 9b). All three recognizers were affected similarly, so a recognizer × speed interaction was not significant (χ²(4,N=780)=4.52, n.s.).

Scores Along the N-Best List
In recognizing a candidate, all three recognizers produce an N-best list with scores at each position. (The result of the recognition is the head of this list.) An examination of the falloff that occurs as we move down the N-best list gives us a sense of the relative competitiveness of results as they vie for the top position. We prefer a rapid and steady falloff, especially from position 1 to 2, indicating a good separation of scores. Such a falloff makes it easier to set a non-recognition threshold and improve recognition robustness.

Table 1. Results for individual gestures: times (ms), number of points, recognition error rates (%) with 9 training examples, and subjective ratings (1=dislike a lot, 5=like a lot). For times, number of points, and error rates, minimum values in each column are marked with (*); maximum values are marked with (†). For subjective ratings, the best is marked with (*); the worst is marked with (†). For readability, extra zeroes are omitted for error rates that are exactly 0%.

Gesture                Milliseconds            NumPts            $1      DTW     Rubine   Subjective
                       Slow    Medium  Fast    Slow  Med   Fast                           (1-5)
arrow                  1876    1268    768     90    76    61     0*      0*      3.70    3.0
caret                  1394    931     452     70    59    43     0.33    0*      2.87    4.0
check                  1028*   682*    393     58*   49*   37*    0.97    0.93    3.97    4.1
circle                 1624    936     496     91    70    50     0.40    0.40    3.13    4.0
delete_mark            1614    1089    616     84    71    55     0*      0*      0.33*   3.2
left_curly_brace       1779    1259    896     81    70    63     2.20†   2.10†   1.67    2.0†
left_square_bracket    1591    1092    678     74    62    51     0*      0*      1.17    3.2
pigtail                1441    949     540     87    72    52     0*      0*      2.83    4.4*
question_mark          1269    837     523     70    60    48     1.37    1.83    6.40    2.7
rectangle              2497    1666    916     117   96    70     0*      0*      12.87   3.0
right_curly_brace      2060    1429    1065†   81    73    66     0.33    0.50    5.47    2.1
right_square_bracket   1599    1044    616     75    62    52     0*      0*      5.90    3.4
star                   3375†   2081†   998     139†  110†  75†    0*      0*      0.40    3.7
triangle               2041    1288    706     99    78    58     1.03    0.73    14.80†  3.6
v                      1143    727     377*    65    53    38     0.83    1.70    6.40    4.1
x                      1837    1162    640     91    73    55     0.40    0.23    2.80    3.7
Mean                   1760.5  1152.5  667.5   85.8  70.9  54.6   0.45    0.54    4.70    3.4
StdDev                 567.3   356.3   211.6   20.1  15.2  10.7   0.75    0.55    4.06    0.7

Figure 9c shows the normalized N-best falloff for all three recognizers using 9 templates or training examples. The first and last results are defined as scores 1.0 and 0.0, respectively. We can see that $1 falls off the fastest, DTW second, and Rubine third. Note that $1 shows the greatest falloff from position 1 to position 2.

Recognizer Execution Speed
We found that DTW runs noticeably slower than the other techniques. On average, DTW took a whopping 128.26 (SD=60.02) minutes to run the 14,400 tests for a given subject's 160 gestures made at a given speed. In contrast, $1 only took 1.59 (0.04) minutes, while Rubine took 2.38 (0.60) minutes. This difference in speed is explained by the fact that DTW's runtime is quadratic in the number of samples. Thus slowly-made gestures are much slower to recognize. As noted, there are procedures to accelerate DTW [24], but these make it a more complicated algorithm, which runs counter to our motivation for this work.

Differences Among Gestures and Subjective Ratings
Table 1 shows results for individual gestures. Here we can see that "check" and "v" were fast gestures at all speeds, and that "star" and "right_curly_brace" were slow. The "check" had the fewest points at all speeds, while the "star" had the most. With 9 templates or training examples loaded for each gesture type, $1 and DTW had perfect recognition rates for 7 and 8 of 16 gestures, respectively, while Rubine had none. Recognizing the "left_curly_brace" gesture was the most difficult for $1 and DTW, while for Rubine it was the "triangle". Rubine performed best on "delete_mark" and "star".

Qualitative results show that subjects liked "pigtail", "check", and "v", all fairly quick gestures. They disliked the curly braces and "question_mark". Subjects' comments as to why they liked certain gestures included, "They were easiest to control," and "They were all one fluid motion." Comments on disliked gestures included, "The curly braces made me feel clumsy," and "Gestures with straight lines or 90° angles were difficult to make, especially slowly."

Discussion
From our experiment, it is clear that $1 performs very well for user interface gestures, recognizing them at more than 99% accuracy overall. DTW performed almost identically, but with much longer processing times. Both algorithms did well even with only 1 loaded template, performing above 97% accuracy. With only 3 loaded templates, both algorithms function at about 99.5% of the accuracy they exhibit at 9 templates. This means that designers and application end-users can define gestures using only a few examples and still expect reliable recognition. Although DTW's flexibility gave it an edge over $1 with few templates, with 9 templates, that same flexibility causes DTW to falter while $1 takes a small lead. This finding resonates with Kristensson and Zhai's decision to abandon elastic matching due to unwanted flexibility [11].

Another interesting finding is that $1 performs well even without using Golden Section Search. $1's overall error rate after only rotating the indicative angle to 0° was 1.21% (3.88), just 1.21–0.98=0.23% higher than using GSS to search for the optimal angular alignment.

At its best, Rubine performed at about 95% accuracy using 9 training examples for each of the 16 gesture types. This result is comparable to the result reported by Rubine himself, who showed 93.5% accuracy on a set of 15 gesture types with 10 training examples per type [23]. Our result may be better due to our use of rotation invariance. Of course, Rubine would improve with more training examples that capture more gesture variability.

Although gesture articulation speed significantly affected errors, this was most evident for Rubine. It is interesting that the medium speed resulted in the best recognition rates for all three recognizers. This may be because at slow speeds, subjects were less fluid, and their gestures were made too tentatively; at fast speeds, their gestures were sloppier. At medium speeds, however, subjects' gestures were neither overly tentative nor overly sloppy, resulting in higher recognition rates. Subjective feedback resonates with this, where fluid gestures were preferred.

The falloff along $1's N-best list is a positive feature of the algorithm, since scores are better differentiated. DTW is nearly the same, but Rubine showed a clear disadvantage in this regard.

Recognizers, Recorders, and Gesture Data Set
To facilitate the recording and testing of gestures, we implemented $1, DTW, and Rubine in C#. Each uses an identical XML gesture format, which is also the format written by our Pocket PC recorder (Figure 8). In addition, we implemented a JavaScript version of $1 for use on the web.³ This version recognizes quite well, even with only 1 template defined. When it does err, the misrecognized gesture can be immediately added as a new template, increasing recognition rates thereafter. In addition to these implementations, we have made our XML gesture set available to other researchers for download and testing.

³ http://faculty.washington.edu/wobbrock/proj/dollar/

FUTURE WORK
Although we demonstrate the strengths of a simple $1 recognizer, we have not yet validated its programming ease for novice programmers. A future study could give different recognition algorithms to user interface prototypers to see which are easiest to build, debug, and comprehend. Given the simplicity of $1, we would expect it to fare quite well.

An interactive extension would be to allow users to correct a failed recognition result using the N-best list, and then have their articulated gesture morph some percentage of the way toward the selected template until it would have been successfully recognized. This kind of interactive correction and animation might aid gesture learning and retention.

Further empirical analysis may help justify some algorithmic choices. For example, we currently compute the indicative angle from the centroid to the first point in the gesture, but the first point in a stroke is probably not the most reliable. Is there another point that would generate more consistent estimates of the best angular alignment?

CONCLUSION
We have presented a simple $1 recognizer that is easy, cheap, and usable almost anywhere. Despite its simplicity, it provides optional rotation, scale, and position invariance, and offers 99+% accuracy with only a few loaded templates. It requires no complex mathematical procedures, yet competes with approaches that use dynamic programming and statistical classification. It also employs a rotation invariance scheme that is applicable to other algorithms like DTW and Rubine. Although $1 has known limitations, it is our hope that this work will support the incorporation of gestures into mobile, tablet, large display, and tabletop systems, particularly by user interface prototypers who may have previously felt gesture recognition was beyond their reach.

REFERENCES
1. Anderson, D., Bailey, C. and Skubic, M. (2004) Hidden Markov Model symbol recognition for sketch-based interfaces. AAAI Fall Symposium. Menlo Park, CA: AAAI Press, 15-21.
2. Cao, X. and Balakrishnan, R. (2003) VisionWand: Interaction techniques for large displays using a passive wand tracked in 3D. Proc. UIST '03. New York: ACM Press, 173-182.
3. Cao, X. and Balakrishnan, R. (2005) Evaluation of an on-line adaptive gesture interface with command prediction. Proc. Graphics Interface '05. Waterloo, Ontario: CHCCS, 187-194.
4. Cho, M.G. (2006) A new gesture recognition algorithm and segmentation method of Korean scripts for gesture-allowed ink editor. Information Sciences 176 (9), 1290-1303.
5. Guimbretière, F., Stone, M. and Winograd, T. (2001) Fluid interaction with high-resolution wall-size displays. Proc. UIST '01. New York: ACM Press, 21-30.
6. Henry, T.R., Hudson, S.E. and Newell, G.L. (1990) Integrating gesture and snapping into a user interface toolkit. Proc. UIST '90. New York: ACM Press, 112-122.
7. Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P. and Smith, M. (2004) Stitching: Pen gestures that span multiple displays. Proc. AVI '04. New York: ACM Press, 23-31.
8. Hong, J.I. and Landay, J.A. (2000) SATIN: A toolkit for informal ink-based applications. Proc. UIST '00. New York: ACM Press, 63-72.
9. Kara, L.B. and Stahovich, T.F. (2004) An image-based trainable symbol recognizer for sketch-based interfaces. AAAI Fall Symposium. Menlo Park, CA: AAAI Press, 99-105.
10. Karlson, A.K., Bederson, B.B. and SanGiovanni, J. (2005) AppLens and LaunchTile: Two designs for one-handed thumb use on small devices. Proc. CHI '05. New York: ACM Press, 201-210.
11. Kristensson, P. and Zhai, S. (2004) SHARK2: A large vocabulary shorthand writing system for pen-based computers. Proc. UIST '04. New York: ACM Press, 43-52.
12. Landay, J. and Myers, B.A. (1993) Extending an existing user interface toolkit to support gesture recognition. Adjunct Proc. CHI '93. New York: ACM Press, 91-92.
13. Lin, J., Newman, M.W., Hong, J.I. and Landay, J.A. (2000) DENIM: Finding a tighter fit between tools and practice for web site design. Proc. CHI '00. New York: ACM Press, 510-517.
14. Long, A.C., Landay, J.A. and Rowe, L.A. (1999) Implications for a gesture design tool. Proc. CHI '99. New York: ACM Press, 40-47.
15. Mitchell, T.M. (1997) Machine Learning. New York: McGraw-Hill.
16. Morris, M.R., Huang, A., Paepcke, A. and Winograd, T. (2006) Cooperative gestures: Multi-user gestural interactions for co-located groupware. Proc. CHI '06. New York: ACM Press, 1201-1210.
17. Myers, B.A., McDaniel, R.G., Miller, R.C., Ferrency, A.S., Faulring, A., Kyle, B.D., Mickish, A., Klimovitski, A. and Doane, P. (1997) The Amulet environment: New models for effective user interface software development. IEEE Trans. Software Engineering 23 (6), 347-365.
18. Myers, C.S. and Rabiner, L.R. (1981) A comparative study of several dynamic time-warping algorithms for connected word recognition. The Bell System Technical J. 60 (7), 1389-1409.
19. Notowidigdo, M. and Miller, R.C. (2004) Off-line sketch interpretation. AAAI Fall Symposium. Menlo Park, CA: AAAI Press, 120-126.
20. Pittman, J.A. (1991) Recognizing handwritten text. Proc. CHI '91. New York: ACM Press, 271-275.
21. Plamondon, R. and Srihari, S.N. (2000) On-line and off-line handwriting recognition: A comprehensive survey. IEEE Trans. Pattern Analysis & Machine Int. 22 (1), 63-84.

22. Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (1992) Numerical Recipes in C. Cambridge Univ. Press.
23. Rubine, D. (1991) Specifying gestures by example. Proc. SIGGRAPH '91. New York: ACM Press, 329-337.
24. Salvador, S. and Chan, P. (2004) FastDTW: Toward accurate dynamic time warping in linear time and space. 3rd Wkshp. on Mining Temporal and Sequential Data, ACM KDD '04. Seattle, Washington (August 22-25, 2004).
25. Sezgin, T.M. and Davis, R. (2005) HMM-based efficient sketch recognition. Proc. IUI '05. New York: ACM Press, 281-283.
26. Stojmenović, M., Nayak, A. and Zunic, J. (2006) Measuring linearity of a finite set of points. Proc. CIS '06. Los Alamitos, CA: IEEE Press, 1-6.
27. Swigart, S. (2005) Easily write custom gesture recognizers for your Tablet PC applications. Tablet PC Technical Articles.
28. Tappert, C.C. (1982) Cursive script recognition by elastic matching. IBM J. of Research & Development 26 (6), 765-771.
29. Tappert, C.C., Suen, C.Y. and Wakahara, T. (1990) The state of the art in online handwriting recognition. IEEE Trans. Pattern Analysis & Machine Int. 12 (8), 787-808.
30. Vermunt, J.K. (1997) Log-linear Models for Event Histories. Thousand Oaks, CA: Sage Publications.
31. Wilson, A.D. and Shafer, S. (2003) XWand: UI for intelligent spaces. Proc. CHI '03. New York: ACM Press, 545-552.
32. Zhai, S. and Kristensson, P. (2003) Shorthand writing on stylus keyboard. Proc. CHI '03. New York: ACM Press, 97-104.

APPENDIX A – $1 GESTURE RECOGNIZER

Step 1. Resample a points path into n evenly spaced points.

RESAMPLE(points, n)
1   I ← PATH-LENGTH(points) / (n – 1)
2   D ← 0
3   newPoints ← points[0]
4   foreach point p[i] for i ≥ 1 in points do
5     d ← DISTANCE(p[i-1], p[i])
6     if (D + d) ≥ I then
7       q.x ← p[i-1].x + ((I – D) / d) × (p[i].x – p[i-1].x)
8       q.y ← p[i-1].y + ((I – D) / d) × (p[i].y – p[i-1].y)
9       APPEND(newPoints, q)
10      INSERT(points, i, q)  // q will be the next p[i]
11      D ← 0
12    else D ← D + d
13  return newPoints

PATH-LENGTH(A)
1   d ← 0
2   for i from 1 to |A| step 1 do
3     d ← d + DISTANCE(A[i-1], A[i])
4   return d

Step 2. Rotate points so that their indicative angle is at 0°.

ROTATE-TO-ZERO(points)
1   c ← CENTROID(points)  // computes (x̄, ȳ)
2   θ ← ATAN(c.y – points[0].y, c.x – points[0].x)  // for -π ≤ θ ≤ π
3   newPoints ← ROTATE-BY(points, -θ)
4   return newPoints

ROTATE-BY(points, θ)
1   c ← CENTROID(points)
2   foreach point p in points do
3     q.x ← (p.x – c.x) COS θ – (p.y – c.y) SIN θ + c.x
4     q.y ← (p.x – c.x) SIN θ + (p.y – c.y) COS θ + c.y
5     APPEND(newPoints, q)
6   return newPoints

Step 3. Scale points so that the resulting bounding box will be of size² dimension; then translate points to the origin. BOUNDING-BOX returns a rectangle according to (min.x, min.y), (max.x, max.y). For gestures serving as templates, Steps 1-3 should be carried out once on the raw input points. For candidates, Steps 1-4 should be used just after the candidate is articulated.

SCALE-TO-SQUARE(points, size)
1   B ← BOUNDING-BOX(points)
2   foreach point p in points do
3     q.x ← p.x × (size / B.width)
4     q.y ← p.y × (size / B.height)
5     APPEND(newPoints, q)
6   return newPoints

TRANSLATE-TO-ORIGIN(points)
1   c ← CENTROID(points)
2   foreach point p in points do
3     q.x ← p.x – c.x
4     q.y ← p.y – c.y
5     APPEND(newPoints, q)
6   return newPoints

Step 4. Match points against a set of templates. The size variable on line 7 of RECOGNIZE refers to the size passed to SCALE-TO-SQUARE in Step 3. The symbol φ equals ½(-1 + √5). We use θ = ±45° and θ∆ = 2° on line 3 of RECOGNIZE. Due to using RESAMPLE in Step 1, we can assume that A and B in PATH-DISTANCE contain the same number of points, i.e., |A| = |B|.

RECOGNIZE(points, templates)
1   b ← +∞
2   foreach template T in templates do
3     d ← DISTANCE-AT-BEST-ANGLE(points, T, -θ, θ, θ∆)
4     if d < b then
5       b ← d
6       T′ ← T
7   score ← 1 – b / (0.5 √(size² + size²))
8   return 〈T′, score〉

DISTANCE-AT-BEST-ANGLE(points, T, θa, θb, θ∆)
1   x1 ← φ θa + (1 – φ) θb
2   f1 ← DISTANCE-AT-ANGLE(points, T, x1)
3   x2 ← (1 – φ) θa + φ θb
4   f2 ← DISTANCE-AT-ANGLE(points, T, x2)
5   while |θb – θa| > θ∆ do
6     if f1 < f2 then
7       θb ← x2
8       x2 ← x1
9       f2 ← f1
10      x1 ← φ θa + (1 – φ) θb
11      f1 ← DISTANCE-AT-ANGLE(points, T, x1)
12    else
13      θa ← x1
14      x1 ← x2
15      f1 ← f2
16      x2 ← (1 – φ) θa + φ θb
17      f2 ← DISTANCE-AT-ANGLE(points, T, x2)
18  return MIN(f1, f2)

DISTANCE-AT-ANGLE(points, T, θ)
1   newPoints ← ROTATE-BY(points, θ)
2   d ← PATH-DISTANCE(newPoints, T)
3   return d

PATH-DISTANCE(A, B)
1   d ← 0
2   for i from 0 to |A| step 1 do
3     d ← d + DISTANCE(A[i], B[i])
4   return d / |A|
