author     Yasir Khan <yasir_khan@mentor.com>  2014-08-25 23:38:31 +0500
committer  Martin Jansa <Martin.Jansa@gmail.com>  2014-08-28 19:55:38 +0200
commit     f6e6d632dbe23fe69c3363d9d70622cc69b25df1 (patch)
tree       e5d1467036ac3f8280c6d26654e5c1a8206badfc /meta-oe/licenses
parent     59a7c659e8d59e3caa5aeddf1ba45e8704174730 (diff)
download   meta-openembedded-f6e6d632dbe23fe69c3363d9d70622cc69b25df1.tar.gz
lmbench: add lmbench-exception LICENSE
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Yasir-Khan <yasir_khan@mentor.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Diffstat (limited to 'meta-oe/licenses')
-rw-r--r--  meta-oe/licenses/GPL-2.0-with-lmbench-restriction  108
1 file changed, 108 insertions(+), 0 deletions(-)
diff --git a/meta-oe/licenses/GPL-2.0-with-lmbench-restriction b/meta-oe/licenses/GPL-2.0-with-lmbench-restriction
new file mode 100644
index 000000000..3e1f7cc6d
--- /dev/null
+++ b/meta-oe/licenses/GPL-2.0-with-lmbench-restriction
@@ -0,0 +1,108 @@
+%M% %I% %E%
+
+The set of programs and documentation known as "lmbench" are distributed
+under the Free Software Foundation's General Public License with the
+following additional restrictions (which override any conflicting
+restrictions in the GPL):
+
+1. You may not distribute results in any public forum, in any publication,
+or in any other way if you have modified the benchmarks.
+
+2. You may not distribute the results for a fee of any kind. This includes
+web sites which generate revenue from advertising.
+
+If you have modifications or enhancements that you wish included in
+future versions, please mail those to me, Larry McVoy, at lm@bitmover.com.
+
+=========================================================================
+
+Rationale for the publication restrictions:
+
+In summary:
+
+a) LMbench is designed to measure enough of an OS that if you do well in
+all categories, you've covered latency and bandwidth in networking,
+disks, file systems, VM systems, and memory systems.
+b) Multiple times in the past people have wanted to report partial results.
+Without exception, they were doing so to show a skewed view of whatever
+it was they were measuring (for example, one OS fit small processes into
+segments and used the segment register to switch them, getting good
+results, but did not want to report large process context switches
+because those didn't look as good).
+c) We insist that if you formally report LMbench results, you have to
+report all of them and make the raw results file easily available.
+Reporting all of them means in that same publication; a pointer
+does not count. Formally, in this context, means in a paper,
+on a web site, etc., but does not mean the exchange of results
+between OS developers who are tuning a particular subsystem.
+
+We have a lot of history with benchmarking and feel strongly that there
+is little to be gained and a lot to be lost if we allowed the results
+to be published in isolation, without the complete story being told.
+
+There has been a lot of discussion about this, with people not liking this
+restriction, more or less on the freedom principle as far as I can tell.
+We're not swayed by that; our position is that we are doing the right
+thing for the OS community and will stick to our guns on this one.
+
+It would be a different matter if there were 3 other competing
+benchmarking systems out there that did what LMbench does and didn't have
+the same reporting rules. There aren't, and as long as that is the case,
+I see no reason to change my mind and lots of reasons not to do so. I'm
+sorry if I'm a pain in the ass on this topic, but I'm doing the right
+thing for you, and the sooner people realize that, the sooner we can get on
+to real work.
+
+Operating system design is largely an art of balancing tradeoffs.
+In many cases improving one part of the system has negative effects
+on other parts of the system. The art is choosing which parts to
+optimize and which not to optimize. Just as in computer architecture,
+you can optimize the common instructions (RISC) or the uncommon
+instructions (CISC), but in either case there is usually a cost to
+pay (in RISC uncommon instructions are more expensive than common
+instructions, and in CISC common instructions are more expensive
+than required). The art lies in knowing which operations are
+important and optimizing those while minimizing the impact on the
+rest of the system.
+
+Since lmbench gives a good overview of many important system features,
+users may see the performance of the system as a whole, and can
+see where tradeoffs may have been made. This is the driving force
+behind the publication restriction: any idiot can optimize certain
+subsystems while completely destroying overall system performance.
+If said idiot publishes *only* the numbers relating to the optimized
+subsystem, then the costs of the optimization are hidden and readers
+will mistakenly believe that the optimization is a good idea. With
+the publication restriction in place, readers would be able to
+detect that the optimization improved the subsystem's performance
+while damaging the rest of the system's performance, and would be able
+to make an informed decision as to the merits of the optimization.
+
+Note that these restrictions only apply to *publications*. We
+intend and encourage lmbench's use during design, development,
+and tweaking of systems and applications. If you are tuning the
+Linux or BSD TCP stack, then by all means, use the networking
+benchmarks to evaluate the performance effects of various
+modifications; swap results with other developers; use the
+networking numbers in isolation. The restrictions only kick
+in when you go to *publish* the results. If you sped up the
+TCP stack by a factor of 2 and want to publish a paper with the
+various tweaks or algorithms used to accomplish this goal, then
+you can publish the networking numbers to show the improvement.
+However, the paper *must* also include the rest of the standard
+lmbench numbers to show how your tweaks may (or may not) have
+impacted the rest of the system. The full set of numbers may
+be included in an appendix, but they *must* be included in the
+paper.
+
+This helps protect the community from adopting flawed technologies
+based on incomplete data. It also helps protect the community from
+misleading marketing which tries to sell systems based on partial
+(skewed) lmbench performance results.
+
+We have seen many cases in the past where partial or misleading
+benchmark results have caused great harm to the community, and
+we want to ensure that our benchmark is not used to perpetrate
+further harm and support false or misleading claims.
+
+