author    Yasir Khan <yasir_khan@mentor.com>    2014-08-25 23:38:31 +0500
committer Martin Jansa <Martin.Jansa@gmail.com>    2014-08-28 19:55:38 +0200
commit    f6e6d632dbe23fe69c3363d9d70622cc69b25df1 (patch)
tree      e5d1467036ac3f8280c6d26654e5c1a8206badfc /meta-oe/licenses
parent    59a7c659e8d59e3caa5aeddf1ba45e8704174730 (diff)
download  meta-openembedded-f6e6d632dbe23fe69c3363d9d70622cc69b25df1.tar.gz
lmbench: add lmbench-exception LICENSE
Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Yasir-Khan <yasir_khan@mentor.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Diffstat (limited to 'meta-oe/licenses')
-rw-r--r--    meta-oe/licenses/GPL-2.0-with-lmbench-restriction    108
1 file changed, 108 insertions, 0 deletions
diff --git a/meta-oe/licenses/GPL-2.0-with-lmbench-restriction b/meta-oe/licenses/GPL-2.0-with-lmbench-restriction
new file mode 100644
index 000000000..3e1f7cc6d
--- /dev/null
+++ b/meta-oe/licenses/GPL-2.0-with-lmbench-restriction
@@ -0,0 +1,108 @@
%M% %I% %E%

The set of programs and documentation known as "lmbench" are distributed
under the Free Software Foundation's General Public License with the
following additional restrictions (which override any conflicting
restrictions in the GPL):

1. You may not distribute results in any public forum, in any publication,
   or in any other way if you have modified the benchmarks.

2. You may not distribute the results for a fee of any kind. This includes
   web sites which generate revenue from advertising.

If you have modifications or enhancements that you wish included in
future versions, please mail those to me, Larry McVoy, at lm@bitmover.com.

=========================================================================

Rationale for the publication restrictions:

In summary:

 a) LMbench is designed to measure enough of an OS that if you do well in
    all categories, you've covered latency and bandwidth in networking,
    disks, file systems, VM systems, and memory systems.
 b) Multiple times in the past people have wanted to report partial results.
    Without exception, they were doing so to show a skewed view of whatever
    it was they were measuring (for example, one OS fit small processes into
    segments and used the segment register to switch them, getting good
    results, but did not want to report large process context switches
    because those didn't look as good).
 c) We insist that if you formally report LMbench results, you have to
    report all of them and make the raw results file easily available.
    Reporting all of them means in that same publication; a pointer
    does not count. "Formally", in this context, means in a paper,
    on a web site, etc., but does not mean the exchange of results
    between OS developers who are tuning a particular subsystem.

We have a lot of history with benchmarking and feel strongly that there
is little to be gained and a lot to be lost if we allow the results
to be published in isolation, without the complete story being told.

There has been a lot of discussion about this, with people not liking this
restriction, more or less on the freedom principle as far as I can tell.
We're not swayed by that; our position is that we are doing the right
thing for the OS community and will stick to our guns on this one.

It would be a different matter if there were 3 other competing
benchmarking systems out there that did what LMbench does and didn't have
the same reporting rules. There aren't, and as long as that is the case,
I see no reason to change my mind and lots of reasons not to do so. I'm
sorry if I'm a pain in the ass on this topic, but I'm doing the right
thing for you, and the sooner people realize that, the sooner we can get on
to real work.

Operating system design is largely an art of balancing tradeoffs.
In many cases improving one part of the system has negative effects
on other parts of the system. The art is choosing which parts to
optimize and which not to optimize. Just like in computer architecture,
you can optimize the common instructions (RISC) or the uncommon
instructions (CISC), but in either case there is usually a cost to
pay (in RISC uncommon instructions are more expensive than common
instructions, and in CISC common instructions are more expensive
than required). The art lies in knowing which operations are
important and optimizing those while minimizing the impact on the
rest of the system.

Since lmbench gives a good overview of many important system features,
users may see the performance of the system as a whole, and can
see where tradeoffs may have been made. This is the driving force
behind the publication restriction: any idiot can optimize certain
subsystems while completely destroying overall system performance.
If said idiot publishes *only* the numbers relating to the optimized
subsystem, then the costs of the optimization are hidden and readers
will mistakenly believe that the optimization is a good idea. With
the publication restriction in place, readers can detect that the
optimization improved the subsystem's performance while damaging the
rest of the system's performance, and can make an informed decision
as to the merits of the optimization.

Note that these restrictions only apply to *publications*. We
intend and encourage lmbench's use during design, development,
and tweaking of systems and applications. If you are tuning the
Linux or BSD TCP stack, then by all means, use the networking
benchmarks to evaluate the performance effects of various
modifications; swap results with other developers; use the
networking numbers in isolation. The restrictions only kick
in when you go to *publish* the results. If you sped up the
TCP stack by a factor of 2 and want to publish a paper with the
various tweaks or algorithms used to accomplish this goal, then
you can publish the networking numbers to show the improvement.
However, the paper *must* also include the rest of the standard
lmbench numbers to show how your tweaks may (or may not) have
impacted the rest of the system. The full set of numbers may
be included in an appendix, but they *must* be included in the
paper.

This helps protect the community from adopting flawed technologies
based on incomplete data. It also helps protect the community from
misleading marketing which tries to sell systems based on partial
(skewed) lmbench performance results.

We have seen many cases in the past where partial or misleading
benchmark results have caused great harm to the community, and
we want to ensure that our benchmark is not used to perpetrate
further harm and support false or misleading claims.
