Performance Comparison of USB 2.0 vs. SATA II

Analysis of Non-Corresponding Measurements

 

Christopher L Griffis

Embry-Riddle Aeronautical University

 

Project Summary

 

Experimental Design

 

Analysis of Data

 

The Automation Software

 

The Spreadsheet

 

The Report

 

 

Analysis of Data

 

Figure 5: USB 2.0 vs. SATA II Analysis Summary, Trial1 summarizes the final outcome of the experimental results after analysis. Next, Figure 6: Paired Analysis for 256 MB Transfer, Trial1 and Figure 7: Paired Analysis for 80 MB Transfer, Trial1 present the analysis of the means of the differences, along with the confidence intervals for each alternative and for the difference of the means. Observe that none of the confidence intervals includes zero, indicating that the difference of the means is statistically significant.
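The paired analysis reported in these figures can be sketched as follows. This is a minimal illustration, not the spreadsheet's actual formulas; the sample values in main() are hypothetical, and the large-sample normal quantile is assumed appropriate given the 190 replications.

```java
import java.util.Arrays;

public class PairedCI {
    // Two-sided z critical value for 99.9% confidence (large-sample approximation)
    static final double Z_999 = 3.2905;

    // Returns {mean, halfWidth} of the confidence interval for the
    // mean of the paired differences usb[i] - sata[i].
    static double[] pairedInterval(double[] usb, double[] sata) {
        int n = usb.length;
        double[] d = new double[n];
        for (int i = 0; i < n; i++) d[i] = usb[i] - sata[i];
        double mean = Arrays.stream(d).average().orElse(0);
        double ss = 0;
        for (double v : d) ss += (v - mean) * (v - mean);
        double sd = Math.sqrt(ss / (n - 1));
        double half = Z_999 * sd / Math.sqrt(n);
        return new double[] { mean, half };
    }

    public static void main(String[] args) {
        // Hypothetical transfer times in ms, not the experiment's data
        double[] usb  = {17250, 17210, 17240, 17228};
        double[] sata = {10360, 10345, 10350, 10355};
        double[] ci = pairedInterval(usb, sata);
        // If the interval [mean - half, mean + half] excludes zero,
        // the difference of the means is statistically significant.
        System.out.printf("%.1f +/- %.1f ms%n", ci[0], ci[1]);
    }
}
```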

 

Figure 8: ANOVA with F-test Results for 256 MB Transfer, Trial1 and Figure 9: ANOVA with F-test Results for 80 MB Transfer, Trial1 present the results of the ANOVA analysis, and Figure 10: Summary Comparison of File Size to MTR, Trial1 gives a visual representation of a curious relationship: an apparent increase in data transfer rate for the larger file size.
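The computation behind the F-test in these figures can be sketched as a one-way ANOVA over k alternatives with n replications each. This is a generic illustration of the technique, not the spreadsheet's actual formulas.

```java
public class Anova {
    // One-way ANOVA F statistic: the ratio of the variation due to the
    // alternatives (MSA) to the variation due to errors (MSE). Assumes
    // every group has the same number of replications.
    static double fStatistic(double[][] groups) {
        int k = groups.length, n = groups[0].length;
        double grand = 0;
        for (double[] g : groups) for (double v : g) grand += v;
        grand /= (double) (k * n);
        double ssa = 0, sse = 0;   // sum of squares: alternatives, errors
        for (double[] g : groups) {
            double mean = 0;
            for (double v : g) mean += v;
            mean /= n;
            ssa += n * (mean - grand) * (mean - grand);
            for (double v : g) sse += (v - mean) * (v - mean);
        }
        double msa = ssa / (k - 1);
        double mse = sse / ((double) k * (n - 1));
        return msa / mse;
    }
}
```

If the computed statistic exceeds the F-ratio value from the table for the chosen confidence level and degrees of freedom, the difference between the alternatives is statistically significant.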

 

 

Figure 5: USB 2.0 vs. SATA II Analysis Summary, Trial1

 

 

 

Figure 6: Paired Analysis for 256 MB Transfer, Trial1

 

Figure 7: Paired Analysis for 80 MB Transfer, Trial1

 

 

 

Figure 8: ANOVA with F-test Results for 256 MB Transfer, Trial1

 

 

 

Figure 9: ANOVA with F-test Results for 80 MB Transfer, Trial1

 

 

 

Figure 10: Summary Comparison of File Size to MTR, Trial1

 

 

In both configurations (transfer of a 256 MB file and transfer of an 80 MB file), SATA II was determined to perform better than USB 2.0, confirming the experimental hypothesis. When repeatedly transferring a 256 MB file across the USB 2.0 (from this point forward abbreviated as “USB”) connection, the mean transfer time was 17232 ms; when repeatedly transferring a 256 MB file across the SATA II (from this point forward abbreviated as “SATA”) connection, the mean transfer time was 10352 ms. The difference in these times shows that the USB connection takes 6879±46 ms longer than the SATA connection; the variation in the measured differences is 0.6% of the difference of the means. These indices were derived from measurements of 190 transfer replications, ensuring with 99.9% confidence that the measured means are within 3% of the true means. Consequently, for a file transfer size of 256 MB, the SATA connection shows a 66.45% relative improvement over the USB connection, operating at a mean transfer rate (MTR) of 24.73 MB/s, as compared to USB’s 14.86 MB/s MTR.
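The reported rates follow from simple arithmetic on the mean transfer times. A small sketch, using the 256 MB means quoted above (the class and method names are illustrative only):

```java
public class TransferStats {
    // Mean transfer rate in MB/s, given file size (MB) and mean time (ms).
    static double mtr(double sizeMB, double meanMs) {
        return sizeMB / (meanMs / 1000.0);
    }

    // Relative improvement of SATA over USB, as a percentage.
    static double relativeImprovement(double usbMs, double sataMs) {
        return (usbMs - sataMs) / sataMs * 100.0;
    }

    public static void main(String[] args) {
        // Mean transfer times reported for the 256 MB configuration
        double usb = 17232, sata = 10352;
        System.out.printf("SATA MTR: %.2f MB/s%n", mtr(256, sata));   // approx. 24.73
        System.out.printf("USB MTR:  %.2f MB/s%n", mtr(256, usb));    // approx. 14.86
        System.out.printf("Improvement: %.2f%%%n", relativeImprovement(usb, sata));
    }
}
```

Note that the 66.45% figure in the text is obtained from the measured paired difference of 6879 ms rather than from the raw difference of the two means.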

 

In order to rule out the possibility that the above results are simply an unexpected consequence of the file size, the same experiment, measurements, analysis, and interpretation were carried out using a file of size 80 MB. In this configuration, SATA is again the better performer. When repeatedly transferring an 80 MB file across the USB connection, the mean transfer time was 6173 ms; when repeatedly transferring an 80 MB file across the SATA connection, the mean transfer time was 4076 ms. The difference in these times shows that the USB connection takes 2097±7 ms longer than the SATA connection; the variation in the measured differences is 0.23% of the difference of the means. These indices were derived from measurements of 190 transfer replications, ensuring with 99.9% confidence that the measured means are within 3% of the true means. Consequently, for a file transfer size of 80 MB, the SATA connection shows a 51.45% relative improvement over the USB connection, operating at a mean transfer rate (MTR) of 19.62 MB/s, as compared to USB’s 12.96 MB/s MTR.

 

One would expect that the transfer rates for each connection would be independent of the file size. Instead, for both the SATA and the USB connection, there is a significant disparity between the observed transfer rates for the 80 MB file size versus the 256 MB file size. This is not believed to indicate an actual trend, but is instead attributed to an unavoidable systematic error introduced as a consequence of the experimental design.

 

After performing the experiment, the number of trials needed to ensure 99.9% confidence that the true means are within 3% was recalculated using the new standard deviations and means for each configuration. As shown in Figure 6 and Figure 7, these numbers are far below the actual number of replications performed. Moreover, three more complete runs of this experiment produced similar results. This serves as a testament to the reproducibility of the experimental results and the quality of the experimental procedure. Additionally, the coefficients of variation for the observations are very low, showing little scattering of the data. Finally, while possibly redundant, the ANOVA test confirmed that the ratio of the variation due to the alternatives to the variation due to errors far exceeds the F-ratio value, establishing that the differences are statistically significant.
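That recalculation can be sketched with the usual large-sample formula n = (100 z s / (r x̄))², where z is the normal quantile for the confidence level, s the sample standard deviation, x̄ the sample mean, and r the desired accuracy in percent. The standard deviation used below is a hypothetical value, not the experiment's.

```java
public class SampleSize {
    // Replications needed so the sample mean is within r percent of the
    // true mean at the confidence level implied by the z value.
    static int requiredReplications(double z, double stdDev, double mean, double rPercent) {
        double n = Math.pow(100.0 * z * stdDev / (rPercent * mean), 2);
        return (int) Math.ceil(n);
    }

    public static void main(String[] args) {
        double z = 3.2905;   // two-sided normal quantile for 99.9% confidence
        // Hypothetical: mean 17232 ms with a standard deviation of 250 ms
        int n = requiredReplications(z, 250.0, 17232.0, 3.0);
        System.out.println(n);   // far below the 190 replications performed
    }
}
```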

 

Table 6 enumerates a list of possible sources of error for this experiment, providing an explanation for each.

 

Table 6: Possible Sources of Error with Explanations

Source of Error: The automation software: the FileWriter constructors are called between time stamps
Explanation: Each time the copyFile() method is called, a delay that is not part of the file transfer activity is introduced. This delay, caused by the call to the FileWriter constructor, slightly inflates the measured time difference relative to the actual file transfer time.

Source of Error: Quantization error of the time stamps
Explanation: The time stamps are taken at millisecond resolution. Should the actual file transfer complete between millisecond ticks, it is not possible to know whether the tick value will be rounded up or down when the time stamp is provided. However, since the standard deviations are not less than 46 ms for any of the experiments, this error can be ignored.

Source of Error: Probe effect due to the time stamps
Explanation: The call that requests a time stamp itself takes time. When the second time stamp is requested, the delay between the call and the return of the time stamp is included in the measured transfer time. These delays, however, are assumed to be too small to have a measurable impact.

Source of Error: Background activity of the operating system
Explanation: The JRE runs on top of the Windows XP OS. Any background activity that happens to preempt the JRE while a transfer is in progress will inflate the time differences, creating outlier measurements.
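The first error source in the table can be illustrated with a hypothetical timing harness; the actual automation software is not reproduced here, and the method names are assumptions. The second method shows one way to keep the stream constructors outside the timed region so that only the copy loop is measured.

```java
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class TransferTimer {
    // Flawed timing, as described in Table 6: the FileWriter constructor
    // runs between the two time stamps, so its delay is counted as part
    // of the transfer time.
    static long timedCopyInflated(String src, String dst) throws IOException {
        long start = System.currentTimeMillis();           // first time stamp
        try (FileReader in = new FileReader(src);
             FileWriter out = new FileWriter(dst)) {       // constructor delay counted
            char[] buf = new char[8192];
            for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
        }
        return System.currentTimeMillis() - start;         // second time stamp
    }

    // One possible remedy: open both streams before taking the first
    // time stamp, so only the copy loop lies in the timed region.
    static long timedCopyTight(String src, String dst) throws IOException {
        try (FileReader in = new FileReader(src);
             FileWriter out = new FileWriter(dst)) {
            long start = System.currentTimeMillis();
            char[] buf = new char[8192];
            for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
            return System.currentTimeMillis() - start;
        }
    }
}
```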

 

 

 
