
Tuesday, March 12, 2019

Customer Satisfaction in E-Commerce

In Proceedings of the 17th IEE UK Teletraffic Symposium, Dublin, Ireland, May 16-18, 2001

QUANTIFYING CUSTOMER SATISFACTION WITH E-COMMERCE WEB SITES

Hubert Graja and Jennifer McManis (1)

Abstract

E-commerce is an increasingly significant part of the global economy. Users of E-commerce Web sites have very high expectations for the quality of service, and if those expectations are not met, the competition is only a click away. A number of performance problems have been observed for E-commerce Web sites, and much work has gone into characterising the performance of Web servers and Internet applications. However, the clients of E-commerce Web sites are less well studied. In this work, we propose a way of assessing the satisfaction of different client types with a Web site according to various parameters. Individual measures may be scaled for simple comparison, and combined to give an overall satisfaction rating. This methodology is applied to three Irish E-Commerce Web sites.

1) Introduction

The World Wide Web is one of the most important Internet services, and has been largely responsible for the phenomenal growth of the Internet in recent years. An increasingly popular and important Web-based activity is E-Commerce, in which various types of financial transactions are carried out or facilitated using the Web. It is widely expected that E-Commerce activity will continue to grow and that it will be a significant component of the global economy in the near future. A number of performance problems in E-Commerce systems have been observed, mainly due to heavier-than-anticipated loads and the consequent inability to satisfy customer requirements. This has resulted in a lot of work attempting to characterise the performance of Web servers and Internet applications, e.g. [1]-[4]. However, the clients of these E-Commerce systems are less well studied.
Some surveys show considerable dissatisfaction with current E-Commerce and Web servers; for example, it has been reported that as many as 60% of users typically cannot find the information they are looking for in a Web site, even though the information is present [5]. In an area such as E-Commerce, clients demand a high quality of service, since it is easy to move to another site if they perceive the current one to be unsatisfactory. An important task in designing E-Commerce systems is to characterise the customers' requirements for satisfactory service. Parameters which affect a customer's satisfaction with an E-Commerce system include the response time, the number of clicks needed to find what they want, the amount of information they are required to give, and the predictability of the service received. This leads to the idea of customer classification, where customers in the same class would value parameters in a similar fashion. Customer classification may be performed either based on how customers measure their satisfaction with an E-Commerce system, or in some other way (e.g. large/medium/small budget; type/speed of the Internet connection the customer has to the server; frequent/previous/new customer). Here we briefly present a methodology for measuring the satisfaction of customer classes. This methodology is applied to a test case consisting of three Irish E-Commerce Web sites in the telecommunications sector. We are able to demonstrate different levels of customer satisfaction among the Web sites, and also different levels of satisfaction with various parameters for each individual Web site.

2) Methodology

In our methodology, we identify customer classes reflecting groups of customers with different behavioural characteristics, and Web site parameters relating to features of the Web site which will potentially affect customer satisfaction.
We then seek to measure customer satisfaction with the various parameters in a consistent and quantifiable way. This methodology is summarised below; a more detailed discussion of the methodology may be found in [6].

2.1) Customer Classification

Customers may be classified in various ways, such as by their behaviour or according to how they measure satisfaction with a Web site. However this classification is made, a representation of the customer class must then be constructed. This representation has two components: first, customer behaviour, and second, customer satisfaction measures for various Web site parameters. We define customer behaviour in terms of the interaction with the Web site. A trace behaviour is defined as the series of clicks and other information that the customer exchanges with the site. Typically, behaviour for a customer class is defined as one or more traces. For a customer class, a weight may be associated with the traces indicating how likely it is for the customer to perform that particular trace behaviour. That is, some behaviour may be exhibited more frequently by a user in a class, and this behaviour should be given higher weight.

(1) Performance Engineering Laboratory (http://www.eeng.dcu.ie/pel), School of Electronic Engineering, Dublin City University, Dublin 9, Ireland; [email address], [email address]

2.2) Customer Satisfaction Measures

The factors which might affect customer satisfaction with a Web site are contained in a parameter list. It is important that for each parameter in the list, satisfaction should be quantifiable. Some quantification measures are easily defined. For instance, if the parameter is the number of clicks, the quantification may be defined as an integer value. Other parameters may have more subjective quantifications. For instance, how does one quantify the quality of information available at a Web site?
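As an illustration, the class representation described above (traces plus likelihood weights) can be sketched as follows. This is an assumed data structure, not the authors' implementation, and the trace names are placeholders:

```python
# Sketch of the customer-class representation described above (assumed
# structure): a trace is an ordered series of clicks, and a class assigns
# each trace behaviour a likelihood weight.

from dataclasses import dataclass

@dataclass(frozen=True)
class Trace:
    name: str
    clicks: tuple  # page identifiers visited in order, e.g. URLs

@dataclass
class CustomerClass:
    name: str
    trace_weights: dict  # trace name -> likelihood of that behaviour

    def is_valid(self) -> bool:
        # Weights are likelihoods, so they should sum to 1.
        return abs(sum(self.trace_weights.values()) - 1.0) < 1e-9

# Hypothetical example: a class in which half the users follow trace "T1".
example = CustomerClass("Example", {"T1": 0.5, "T2": 0.3, "T3": 0.2})
print(example.is_valid())  # True
```

The weights here play exactly the role of the trace weightings defined later for the Private and Business classes.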
In order to compare the satisfaction measured for different parameters, the quantifications must be mapped to a fixed scale. For instance, all measures could be mapped to a scale of 0 to 10. This mapping is what allows us to represent different customer valuations of the same parameters. For instance, some customers will tolerate delay better than others. This may lead to one customer mapping a download time of 5 seconds to 10 and another mapping a download time of 5 seconds to 0. Studies such as [7] indicate that this mapping can be complex and context dependent.

2.3) Analysis of Customer Satisfaction for a Web Site

Using the above, for each trace it is possible to associate a satisfaction value with every parameter. The trace weights may then be used to arrive at a weighted average of the satisfaction values associated with the parameters. This gives a measure of how satisfied a given class of customers is with a given parameter. Finally, a weighting of parameters can be defined, allowing for an overall satisfaction measure of a class for the Web site. By varying this weighting, we can study how different parameters affect customer satisfaction.

3) Test Results

The most difficult part of this exercise is in relating customer trace behaviour to the satisfaction vector. How parameter satisfaction is measured, and how it is mapped onto a fixed scale, must be addressed on a case-by-case basis, although experience using the methodology may lead to the definition of some standard cases. Also, since multiple executions of the same trace may lead to different values, some statistical analysis may be required. We have applied our methodology to three Irish E-Commerce Web sites in the telecommunications sector (designated here as Web sites A, B, and C).

3.1) Customer Classification

Customers for the three Web sites we examined have been divided into two distinct classes: Private and Business.
Traces are associated with searching for specific information that the customers might be interested in. Six customer tasks are identified in Table 1, and for each Web site a trace is devised to perform the task. For the sake of convenience, we call all traces associated with a given task by the same name, even though the trace is actually specific to the Web site. Data Services is split into T4a and T4b because Web site B provided different pages depending on whether the customer was Private or Business.

Table 1: Tasks

  Trace  Task
  T1     Where to buy a phone
  T2     Coverage
  T3     Tariffs
  T4a    WAP Data Services
  T4b    Data Services for Business
  T5     Roaming List
  T6     Business Tariffs

The Private and Business customer classes are defined as a combination of the above tasks, and an associated weighting is given which is indicative of the relative likelihood of customers of a given class seeking to perform that task. Trace weightings for the Private and Business classes are given in Table 2. The interpretation is that for a group of Private users, roughly half might want to know where to buy a phone, 30% might want to know about tariffs, 10% might want to know about coverage, and 10% might want to know about WAP services. The Business users exhibit different behaviour, with 30% wanting to know about coverage, 30% being interested in the roaming list, 20% being interested in data services, and 20% being interested in business tariffs.

Table 2: Trace weightings for different customer classes

  Customer Class  Trace     Trace Weighting
  Private         T1        0.5
                  T2        0.1
                  T3        0.3
                  T4a       0.1
  Business        T2        0.3
                  T4a, T4b  0.2
                  T5        0.3
                  T6        0.2

3.2) Satisfaction Measures

Three parameters were identified: Complexity, Time, and Quality. Complexity was measured as the number of clicks to reach the destination. Time was measured as total download time in seconds.
Quality was a subjective measure of the quality of the information contained in the site (could the information be found, and how easy was it to find?). Quality was measured using a small-scale user survey where the users were asked to examine the end page for each task and rate their satisfaction with the information they found there on a scale of 0-100%. A scale of 0-10 (with 0 being worst and 10 best) was chosen for a uniform comparison of satisfaction values. The measured satisfaction values were mapped onto the 0-10 scale as follows:

  Complexity:  10 * 20^(-(n-1)/10), where n is the number of clicks
  Time:        10 * 10^(-t/60), where t is the trace download time in seconds
  Quality:     x/10, where x is the average value (in %) of user satisfaction with the quality of the page

For Quality, a straightforward linear mapping was applied. More complex mappings were employed for Complexity and Time, and are shown in Figure 1. Examining the Time mapping, we see that 60 seconds is regarded as an unacceptable download time, and even 30 seconds leads to a fairly poor rating. Similarly, for Complexity, 10 clicks is regarded as unacceptable, and even 5 clicks is fairly poor. Note that we have chosen one among many possible mappings. It is up to the tester to decide how to choose a mapping that best reflects customer preferences. Also note that, in this case, all customers use the same mappings, and thus are seen to perceive the parameters in a similar fashion. It is an easy extension to assign different scale mappings to different customer classes or to different traces.

Figure 1: Mapping time and complexity measures to a 0-10 scale

3.3) Satisfaction Measurement for Web Sites

Once the satisfaction measures are determined, it remains to test the Web sites and compare results. Data was gathered using the Web Performance Trainer 2.1 tool [8] to execute each of the traces on the Web site in question. This was necessary only to obtain the time data, and was carried out on a weekday.
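The three mappings can be written directly as code. The exponential forms are reconstructed from the flattened source text (the exponents were originally typeset as superscripts); they reproduce the scaled values reported in Tables 3-5:

```python
# The three scale mappings of Section 3.2, each producing a value on the
# 0-10 satisfaction scale.

def complexity_score(n: int) -> float:
    """Map a click count n via 10 * 20^(-(n-1)/10): 1 click -> 10, 10 clicks -> ~0.7."""
    return 10 * 20 ** (-(n - 1) / 10)

def time_score(t: float) -> float:
    """Map a download time t in seconds via 10 * 10^(-t/60): 60 s -> 1.0."""
    return 10 * 10 ** (-t / 60)

def quality_score(x: float) -> float:
    """Map an average user rating x (0-100%) linearly to 0-10."""
    return x / 10

# Spot checks against Table 3 (Web site A, Private class, trace T1:
# 4 clicks, 37.6 s, quality rating 80%):
print(round(complexity_score(4), 1))  # 4.1
print(round(time_score(37.6), 1))     # 2.4
print(round(quality_score(80), 1))    # 8.0
```

With these mappings, 5 clicks scores about 3.0 and 30 seconds about 3.2, matching the "fairly poor" characterisation in the text.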
The other two satisfaction values can be determined by an examination of the Web sites. Tables 3, 4, and 5 summarise the satisfaction measures for the three Web sites respectively.

Table 3: Customer Satisfaction for Web Site A

                                 Satisfaction Measures
  Customer Class  Trace          Complexity   Time          Quality
                                 raw  scaled  raw   scaled  raw  scaled
  Private         T1             4    4.1     37.6  2.4     80   8.0
                  T3             5    3.0     34.0  2.7     72   7.2
                  T2             4    4.1     34.7  2.6     67   6.7
                  T4a            4    4.1     28.6  3.3     68   6.8
                  weighted avg.       3.8           2.6          7.5
  Business        T2             4    4.1     34.7  2.6     61   6.1
                  T5             5    3.0     46.9  1.7     69   6.9
                  T4a            4    4.1     28.6  3.3     66   6.6
                  T6             4    4.1     38.7  2.3     64   6.4
                  weighted avg.       3.8           2.4          6.5

Table 4: Customer Satisfaction for Web Site B

                                 Satisfaction Measures
  Customer Class  Trace          Complexity   Time          Quality
                                 raw  scaled  raw   scaled  raw  scaled
  Private         T1             4    4.1     16.7  5.3     86   8.6
                  T3             2    7.4     11.2  6.5     76   7.6
                  T2             3    5.5     17.1  5.2     76   7.6
                  T4a            3    5.5     13.9  5.9     74   7.4
                  weighted avg.       5.4           5.7          8.1
  Business        T2             3    5.5     17.1  5.2     73   7.3
                  T5             4    4.1     14.   5.7     75   7.5
                  T4b            4    4.1     39.7  2.2     64   6.4
                  T6             2    7.4     12.3  6.2     76   7.6
                  weighted avg.       5.2           4.9          7.2

Table 5: Customer Satisfaction for Web Site C

                                 Satisfaction Measures
  Customer Class  Trace          Complexity   Time          Quality
                                 raw  scaled  raw   scaled  raw  scaled
  Private         T1             4    4.1     14.0  5.8     81   8.1
                  T3             3    5.5     13.0  6.1     68   6.8
                  T2             2    7.4     11.1  6.5     68   6.8
                  T4a            3    5.5     12.4  6.2     58   5.8
                  weighted avg.       5.0           6.0          7.4
  Business        T2             2    7.4     11.1  6.5     61   6.1
                  T5             2    7.4     10.2  6.8     53   5.3
                  T4a            3    5.5     12.4  6.2     60   6.0
                  T6             2    7.4     10.9  6.6     53   5.3
                  weighted avg.       7.0           6.5          5.7

The overall satisfaction measures are summarised in Table 6. Some interesting conclusions can be drawn from these measures. Firstly, for all Web sites and all parameters, there was a variation in satisfaction levels between the customer classes. Thus, not all users find the Web sites equally good.
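The "weighted avg." rows are the trace-weighted averages of Section 2.3. As a sketch (not the authors' code), the computation for the Private class on Web site A, using the Table 2 weights and the Table 3 scaled values, is:

```python
# Trace-weighted per-parameter satisfaction for one customer class
# (Section 2.3), illustrated with the Web site A / Private data above.

trace_weights = {"T1": 0.5, "T3": 0.3, "T2": 0.1, "T4a": 0.1}  # Table 2

# Scaled satisfaction values per trace, from Table 3.
scaled = {
    "T1":  {"Complexity": 4.1, "Time": 2.4, "Quality": 8.0},
    "T3":  {"Complexity": 3.0, "Time": 2.7, "Quality": 7.2},
    "T2":  {"Complexity": 4.1, "Time": 2.6, "Quality": 6.7},
    "T4a": {"Complexity": 4.1, "Time": 3.3, "Quality": 6.8},
}

def class_satisfaction(weights, scaled, parameter):
    """Trace-weighted average of the scaled values for one parameter."""
    return sum(w * scaled[t][parameter] for t, w in weights.items())

for p in ("Complexity", "Time", "Quality"):
    print(p, round(class_satisfaction(trace_weights, scaled, p), 1))
# Reproduces the Private "weighted avg." row of Table 3:
# Complexity 3.8, Time 2.6, Quality 7.5
```

The same function, with the Business weights and values, reproduces the Business rows.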
This is most noticeable for the Quality parameter: Private users rated Quality higher than Business users in all cases. If Business customers are considered valuable, this gap is not desirable. There is also a large difference in satisfaction ratings for the Time parameter of Web site B, again favouring Private customers over Business customers. Secondly, for all users and all measures, there is a range of values across the Web sites. For instance, the Time satisfaction for Business users varies from 6.5 for Web site C down to 2.4 for Web site A. This indicates that Web site C might have an edge in attracting Business customers. Finally, for a given user class and Web site, different satisfaction levels are observed. For example, Private users of Web site A have a Time satisfaction value of 2.6 and a Quality satisfaction value of 7.5. The exact interpretation of this is difficult, since the different parameter satisfaction values are dependent on the mapping of the raw data, which necessarily differs for each parameter. However, it does perhaps indicate a favouring of form over efficiency.

Table 6: Customer Class Satisfaction for Web sites A, B, and C

                              Satisfaction Measures
  Customer Class  Web Site    Complexity  Time  Quality
  Private         Web site A  3.8         2.6   7.5
                  Web site B  5.4         5.7   8.1
                  Web site C  5.0         6.0   7.4
  Business        Web site A  3.8         2.4   6.5
                  Web site B  5.2         4.9   7.2
                  Web site C  7.0         6.5   5.7

Finally, an overall assessment of customer satisfaction may be found by weighting the various parameters. Table 7 displays the overall satisfaction results under several different weighting schemes: Weighting 1 gives all parameters equal weighting; Weighting 2 gives Time and Complexity equal weighting and Quality zero weighting; Weighting 3 considers Time only (zero weighting for Quality and Complexity).
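The overall score under each scheme is just a parameter-weighted average of the class-level values in Table 6. A sketch of that computation (not the authors' code; note that rounding to one decimal can occasionally differ by 0.1 from the published table because of intermediate rounding):

```python
# Overall satisfaction as a parameter-weighted average of the class-level
# scores in Table 6 (tuple order: Complexity, Time, Quality).

table6 = {
    ("Private", "A"): (3.8, 2.6, 7.5),
    ("Private", "B"): (5.4, 5.7, 8.1),
    ("Private", "C"): (5.0, 6.0, 7.4),
    ("Business", "A"): (3.8, 2.4, 6.5),
    ("Business", "B"): (5.2, 4.9, 7.2),
    ("Business", "C"): (7.0, 6.5, 5.7),
}

weightings = {
    "Weighting 1": (1/3, 1/3, 1/3),  # all parameters equal
    "Weighting 2": (0.5, 0.5, 0.0),  # Time and Complexity only
    "Weighting 3": (0.0, 1.0, 0.0),  # Time only
}

def overall(scores, weights):
    """Parameter-weighted average of one class/site score triple."""
    return sum(s * w for s, w in zip(scores, weights))

for (cls, site), scores in table6.items():
    row = [round(overall(scores, w), 1) for w in weightings.values()]
    print(cls, site, row)
# e.g. the Private / Web site A row comes out as [4.6, 3.2, 2.6]
```

Varying the weighting tuple is exactly the sensitivity study described in Section 2.3.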
These weightings reflect possible values the tester places on the various parameters. We can see that for all the weightings, Business users have a fixed order of preference, ranking Web site C highest, then Web site B, and finally Web site A. The order of preference for Private users varies according to the weighting used, although Web site A is worst under all three weightings.

Table 7: Overall customer satisfaction with the Web sites

                              Satisfaction Measures
  Customer Class  Web Site    Weighting 1  Weighting 2  Weighting 3
  Private         Web site A  4.6          3.2          2.6
                  Web site B  6.4          5.6          5.7
                  Web site C  6.1          5.5          6.0
  Business        Web site A  4.2          3.1          2.4
                  Web site B  5.8          5.1          4.9
                  Web site C  6.4          6.8          6.5

4) Conclusions

Modelling customer satisfaction with Web and E-commerce sites is not as well studied as Web server modelling, but determining whether and how the customers of these sites are satisfied with their interactions is becoming increasingly important as the Web matures. We have proposed a methodology for estimating how satisfied defined classes of customers are with a Web site. Our approach recognises that customer satisfaction is a complex issue and includes factors which are not easily measured. We have applied our methodology to the study of three Irish E-Commerce Web sites. These sites were chosen for representative purposes only, and the results do not necessarily generalise to other Web sites. Choices for the tester include not only what customer categories and what Web site parameters to examine, but also how to interpret the measured data such as download time. The flexibility of the methodology means that it will be necessary for the tester to carefully consider all of their options. The next step is to investigate whether generic categories of users can be defined, and/or whether they care about generic Web site parameters (e.g. it seems download time will always be a factor in user satisfaction).
Given a specific Web site, we will explore methods for mapping these generic user types and satisfaction parameters onto the site's content. If an analysis of the resulting satisfaction measures shows that there is a disparity in the satisfaction of different user types, we will study how the Web site designer or administrator should take this into account, and whether their reaction can be determined dynamically while the user is interacting with the site.

References

1. Nakamura et al., "ENMA: The WWW Server Performance Measurement System via Packet Monitoring", INET99.
2. Cottrell et al., "Tutorial on Internet Monitoring and PingER at SLAC", available from http://www.slac.stanford.edu/comp/net/wan-mon/tutorial.html
3. Kalidindi and Zekauskas, "Surveyor: An Infrastructure for Internet Performance Measurements", INET99.
4. Hava and Murphy, "Performance Measurement of World Wide Web Servers", Proc. of 16th UK Teletraffic Symposium, May 2000.
5. http://www.ecai.ie/usability_online.htm
6. Graja and McManis, "Modelling User Interactions with E-Commerce Services", to be presented at ICN01, Colmar, France, July 2001.
7. Bouch, Kuchinsky, and Bhatti, "Quality is in the Eye of the Beholder: Meeting Users' Requirements for Internet Quality of Service", HP technical report HPL-2000-4, http://www.hpl.hp.com/techreports/2000/HPL-2000-4.html
8. Web Performance Incorporated, http://www.webperfcenter.com
