perf
====

The perf tool is the profiling and tracing tool that comes bundled
with the Linux kernel.

Don't let the fact that it's part of the kernel fool you into thinking
that it's only for tracing and profiling the kernel --- you can indeed use
it to trace and profile just the kernel, but you can also use it to
profile specific applications separately (with or without kernel
context), and you can also use it to trace and profile the kernel and
user space on the same system at the same time, to get a unified view
of what's going on.
26 26
27In many ways, perf aims to be a superset of all the tracing and 27In many ways, perf aims to be a superset of all the tracing and
28profiling tools available in Linux today, including all the other tools 28profiling tools available in Linux today, including all the other tools
29covered in this HOWTO. The past couple of years have seen perf subsume a 29covered in this How-to. The past couple of years have seen perf subsume a
30lot of the functionality of those other tools and, at the same time, 30lot of the functionality of those other tools and, at the same time,
31those other tools have removed large portions of their previous 31those other tools have removed large portions of their previous
32functionality and replaced it with calls to the equivalent functionality 32functionality and replaced it with calls to the equivalent functionality
33now implemented by the perf subsystem. Extrapolation suggests that at 33now implemented by the perf subsystem. Extrapolation suggests that at
34some point those other tools will simply become completely redundant and 34some point those other tools will become completely redundant and
35go away; until then, we'll cover those other tools in these pages and in 35go away; until then, we'll cover those other tools in these pages and in
36many cases show how the same things can be accomplished in perf and the 36many cases show how the same things can be accomplished in perf and the
37other tools when it seems useful to do so. 37other tools when it seems useful to do so.
38 38
39The coverage below details some of the most common ways you'll likely 39The coverage below details some of the most common ways you'll likely
40want to apply the tool; full documentation can be found either within 40want to apply the tool; full documentation can be found either within
41the tool itself or in the man pages at 41the tool itself or in the manual pages at
42`perf(1) <https://linux.die.net/man/1/perf>`__. 42`perf(1) <https://linux.die.net/man/1/perf>`__.
43 43
44Perf Setup 44perf Setup
45---------- 45----------
46 46
47For this section, we'll assume you've already performed the basic setup 47For this section, we'll assume you've already performed the basic setup
48outlined in the ":ref:`profile-manual/intro:General Setup`" section. 48outlined in the ":ref:`profile-manual/intro:General Setup`" section.
49 49
50In particular, you'll get the most mileage out of perf if you profile an 50In particular, you'll get the most mileage out of perf if you profile an
51image built with the following in your ``local.conf`` file: :: 51image built with the following in your ``local.conf`` file::
52 52
53 INHIBIT_PACKAGE_STRIP = "1" 53 INHIBIT_PACKAGE_STRIP = "1"
54 54
55perf runs on the target system for the most part. You can archive 55perf runs on the target system for the most part. You can archive
56profile data and copy it to the host for analysis, but for the rest of 56profile data and copy it to the host for analysis, but for the rest of
57this document we assume you've ssh'ed to the host and will be running 57this document we assume you're connected to the host through SSH and will be
58the perf commands on the target. 58running the perf commands on the target.
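If you do want to analyze the data on the host instead, the round trip might look something like the sketch below. This assumes your perf build includes the ``perf archive`` helper, which bundles the object files and build-ids referenced by ``perf.data`` into a tarball, and it borrows the ``trz@empanada`` host name from the examples later in this section::

   root@crownbay:~# perf record wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   root@crownbay:~# perf archive
   root@crownbay:~# scp perf.data perf.data.tar.bz2 trz@empanada:

On the host, unpack the archived symbols into ``~/.debug`` and then browse the copied ``perf.data`` with ``perf report`` as usual.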

Basic perf Usage
----------------

The perf tool is pretty much self-documenting. To remind yourself of the
available commands, just type ``perf``, which will show you basic usage
along with the available perf subcommands::

   root@crownbay:~# perf

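You can also get detailed help for any individual subcommand with ``perf help`` (a standard part of the perf tool, though the exact output varies by version)::

   root@crownbay:~# perf help record
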
Using perf to do Basic Profiling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As a simple test case, we'll profile the ``wget`` of a fairly large file,
which is a minimally interesting case because it has both file and
network I/O aspects, and at least in the case of standard Yocto images,
it's implemented as part of BusyBox, so the methods we use to analyze it
can be used in a similar way to the whole host of supported BusyBox
applets in Yocto::

   root@crownbay:~# rm linux-2.6.19.2.tar.bz2; \
   wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2

The quickest and easiest way to get some basic overall data about what's
going on for a particular workload is to profile it using ``perf stat``.
This command basically profiles using a few default counters and displays
the summed counts at the end of the run::

   root@crownbay:~# perf stat wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% |***************************************************| 41727k 0:00:00 ETA

   Performance counter stats for 'wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2':

        4597.223902 task-clock                #    0.077 CPUs utilized
              23568 context-switches          #    0.005 M/sec

       59.836627620 seconds time elapsed

Such a simple-minded test doesn't always yield much of interest, but sometimes
it does (see the :yocto_bugs:`Slow write speed on live images with denzil
</show_bug.cgi?id=3049>` bug report).

Also, note that ``perf stat`` isn't restricted to a fixed set of counters
--- basically any event listed in the output of ``perf list`` can be tallied
by ``perf stat``. For example, suppose we wanted to see a summary of all
the events related to kernel memory allocation/freeing along with cache
hits and misses::

   root@crownbay:~# perf stat -e kmem:* -e cache-references -e cache-misses wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% |***************************************************| 41727k 0:00:00 ETA

   Performance counter stats for 'wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2':

               5566 kmem:kmalloc
             125517 kmem:kmem_cache_alloc

       44.831023415 seconds time elapsed

As you can see, ``perf stat`` gives us a nice, easy
way to get a quick overview of what might be happening for a set of
events, but normally we'd need a little more detail in order to
understand what's going on in a way that we can act on in a useful way.

To dive down into the next level of detail, we can use ``perf record`` /
``perf report``, which will collect profiling data and present it to us in an
interactive text-based UI (or just as text if we specify ``--stdio`` to
``perf report``).
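
For example, when you're working on a serial console where the interactive UI is awkward, you can dump the same profile as plain text (a quick sketch; see the perf documentation for the full set of reporting options)::

   root@crownbay:~# perf report --stdio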

As our first attempt at profiling this workload, we'll just run ``perf
record``, handing it the workload we want to profile (everything after
``perf record``, along with any perf options we hand it --- here none --- will be
executed in a new shell). perf collects samples until the process exits
and records them in a file named ``perf.data`` in the current working
directory::

   root@crownbay:~# perf record wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2

   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% |************************************************| 41727k 0:00:00 ETA
   [ perf record: Captured and wrote 0.176 MB perf.data (~7700 samples) ]

To see the results in a
"text-based UI" (tui), just run ``perf report``, which will read the
``perf.data`` file in the current working directory and display the results
in an interactive UI::

   root@crownbay:~# perf report

.. image:: figures/perf-wget-flat-stripped.png
   :align: center
   :width: 70%

The above screenshot displays a "flat" profile, one entry for each
"bucket" corresponding to the functions that were profiled during the
profiling run, ordered from the most popular to the least (perf has
options to sort in various orders and keys as well as display entries
only above a certain threshold and so on --- see the perf documentation
for details). Note that this includes both user space functions (entries
containing a ``[.]``) and kernel functions accounted to the process (entries
containing a ``[k]``). perf has command-line modifiers that can be used to
restrict the profiling to kernel or user space, among others.
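
As a rough sketch of those options (the ``--sort`` flag and the ``:u`` / ``:k`` event modifiers are standard perf syntax, but check your version's documentation for the exact set of sort keys)::

   root@crownbay:~# perf report --sort comm,dso
   root@crownbay:~# perf record -e cycles:u wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   root@crownbay:~# perf record -e cycles:k wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2

Here ``--sort comm,dso`` groups samples by process and binary, while the ``cycles:u`` and ``cycles:k`` events restrict sampling to user space and the kernel, respectively.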

Notice also that the above report shows an entry for ``busybox``, which is
the executable that implements ``wget`` in Yocto, but that instead of a
useful function name in that entry, it displays a not-so-friendly hex
value. The steps below will show how to fix that problem.

Before we do that, however, let's try running a different profile, one
which shows something a little more interesting. The only difference
between the new profile and the previous one is that we'll add the ``-g``
option, which will record not just the address of a sampled function,
but the entire call chain to the sampled function as well::

   root@crownbay:~# perf record -g wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% |************************************************| 41727k 0:00:00 ETA
   [ perf record: Woken up 3 times to write data ]

.. image:: figures/perf-wget-g-copy-to-user-expanded-stripped.png
   :align: center
   :width: 70%

Using the call graph view, we can actually see not only which functions
took the most time, but we can also see a summary of how those functions
were called and learn something about how the program interacts with the
kernel in the process.

Notice that each entry in the above screenshot now contains a ``+`` on the
left side. This means that we can expand the entry and drill down
into the call chains that feed into that entry. Pressing ``Enter`` on any
one of them will expand the call chain (you can also press ``E`` to expand
them all at the same time or ``C`` to collapse them all).

In the screenshot above, we've toggled the ``__copy_to_user_ll()`` entry
and several subnodes all the way down. This lets us see which call chains
contributed to the profiled ``__copy_to_user_ll()`` function, which
contributed 1.77% to the total profile.

As a bit of background explanation for these call chains, think about
what happens at a high level when you run ``wget`` to get a file out on the
network. Basically what happens is that the data comes into the kernel
via the network connection (socket) and is passed to the user space
program ``wget`` (which is actually a part of BusyBox, but that's not
important for now), which takes the buffers the kernel passes to it and
writes them to a disk file to save them.

The part of this process that we're looking at in the above call stacks
is the part where the kernel passes the data it has read from the socket
down to wget, i.e. a ``copy-to-user``.

Notice also that there's a case here where the hex value is
displayed in the call stack, in the expanded ``sys_clock_gettime()``
function. Later we'll see it resolve to a user space function call in
BusyBox.

.. image:: figures/perf-wget-g-copy-from-user-expanded-stripped.png
   :align: center
   :width: 70%

The above screenshot shows the other half of the journey for the data ---
from the ``wget`` program's user space buffers to disk. To get the buffers to
disk, the ``wget`` program issues a ``write(2)``, which does a ``copy-from-user`` to
the kernel, which then takes care, via some circuitous path (probably
also present somewhere in the profile data), of getting it safely to disk.

Now that we've seen the basic layout of the profile data and the basics
of how to extract useful information out of it, let's get back to the
task at hand and see if we can get some basic idea about where the time
is spent in the program we're profiling, wget. Remember that wget is
actually implemented as an applet in BusyBox, so while the process name
is ``wget``, the executable we're actually interested in is ``busybox``.
Therefore, let's expand the first entry containing BusyBox:

.. image:: figures/perf-wget-busybox-expanded-stripped.png
   :align: center
   :width: 70%

Again, before we expanded we saw that the function was labeled with a
hex value instead of a symbol as with most of the kernel entries.
Expanding the BusyBox entry doesn't make it any better.

The problem is that perf can't find the symbol information for the
``busybox`` binary, which is actually stripped out by the Yocto build
system.

One way around that is to put the following in your ``local.conf`` file
when you build the image::

   INHIBIT_PACKAGE_STRIP = "1"

However, we already have an image with the binaries stripped, so
what can we do to get perf to resolve the symbols? Basically we need to
install the debugging information for the BusyBox package.

To generate the debug info for the packages in the image, we can add
``dbg-pkgs`` to :term:`EXTRA_IMAGE_FEATURES` in ``local.conf``. For example::

   EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"

Additionally, in order to generate the type of debugging information that perf
understands, we also need to set :term:`PACKAGE_DEBUG_SPLIT_STYLE`
in the ``local.conf`` file::

   PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'

Once we've done that, we can install the debugging information for BusyBox. The
debug packages, once built, can be found in ``build/tmp/deploy/rpm/*``
on the host system. Find the ``busybox-dbg-...rpm`` file and copy it
to the target. For example::

   [trz@empanada core2]$ scp /home/trz/yocto/crownbay-tracing-dbg/build/tmp/deploy/rpm/core2_32/busybox-dbg-1.20.2-r2.core2_32.rpm root@192.168.1.31:
   busybox-dbg-1.20.2-r2.core2_32.rpm     100% 1826KB   1.8MB/s   00:01

Now install the debug RPM on the target::

   root@crownbay:~# rpm -i busybox-dbg-1.20.2-r2.core2_32.rpm

Now that the debugging information is installed, we see that the BusyBox
entries now display their functions symbolically:

.. image:: figures/perf-wget-busybox-debuginfo.png
   :align: center
   :width: 70%

If we expand one of the entries and press ``Enter`` on a leaf node, we're
presented with a menu of actions we can take to get more information
related to that entry:

.. image:: figures/perf-wget-busybox-dso-zoom-menu.png
   :align: center
   :width: 70%

One of these actions allows us to show a view that displays a
busybox-centric view of the profiled functions (in this case we've also
expanded all the nodes using the ``E`` key):

.. image:: figures/perf-wget-busybox-dso-zoom.png
   :align: center
   :width: 70%

Finally, we can see that now that the BusyBox debugging information is
installed, the unresolved symbol in the ``sys_clock_gettime()`` entry
mentioned earlier is now resolved, and shows that the
``sys_clock_gettime`` system call that was the source of 6.75% of the
``copy-to-user`` overhead was initiated by the ``handle_input()`` BusyBox
function:

.. image:: figures/perf-wget-g-copy-to-user-expanded-debuginfo.png
   :align: center
   :width: 70%

At the lowest level of detail, we can dive down to the assembly level
and see which instructions caused the most overhead in a function.
Pressing ``Enter`` on the ``udhcpc_main`` function, we're again presented
with a menu:

.. image:: figures/perf-wget-busybox-annotate-menu.png
   :align: center
   :width: 70%

Selecting ``Annotate udhcpc_main``, we get a detailed listing of
percentages by instruction for the ``udhcpc_main`` function. From the
display, we can see that over 50% of the time spent in this function is
taken up by a couple of tests and the move of a constant (1) to a register:

.. image:: figures/perf-wget-busybox-annotate-udhcpc.png
   :align: center
   :width: 70%

As a segue into tracing, let's try another profile using a different
counter, something other than the default ``cycles``.

The tracing and profiling infrastructure in Linux has become unified in
a way that allows us to use the same tool with a completely different
set of counters, not just the standard hardware counters that
traditional tools have had to restrict themselves to (the
traditional tools can now actually make use of the expanded possibilities
available to them, and in some cases have, as mentioned previously).

We can get a list of the available events that can be used to profile a
workload via ``perf list``::

   root@crownbay:~# perf list

.. admonition:: Tying it Together

   These are exactly the same set of events defined by the trace event
   subsystem and exposed by ftrace / trace-cmd / KernelShark as files in
   ``/sys/kernel/debug/tracing/events``, by SystemTap as
   ``kernel.trace("tracepoint_name")``, and (partially) accessed by LTTng.

Only a subset of these would be of interest to us when looking at this
workload, so let's choose the most likely subsystems (identified by the
string before the colon in the ``Tracepoint`` events) and do a ``perf stat``
run using only those subsystem wildcards::

   root@crownbay:~# perf stat -e skb:* -e net:* -e napi:* -e sched:* -e workqueue:* -e irq:* -e syscalls:* wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   Performance counter stats for 'wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2':

              23323 skb:kfree_skb
                  0 skb:consume_skb
@@ -587,17 +596,18 @@ run using only those wildcarded subsystems: ::

Let's pick one of these tracepoints
and tell perf to do a profile using it as the sampling event::

   root@crownbay:~# perf record -g -e sched:sched_wakeup wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2

.. image:: figures/sched-wakeup-profile.png
   :align: center
   :width: 70%

The screenshot above shows the results of running a profile using the
``sched:sched_wakeup`` tracepoint, which shows the relative costs of various
paths to ``sched_wakeup`` (note that ``sched_wakeup`` is the name of the
tracepoint --- it's actually defined just inside ``ttwu_do_wakeup()``, which
accounts for the function name actually displayed in the profile):

.. code-block:: c
@@ -615,15 +625,15 @@ accounts for the function name actually displayed in the profile:
   }

A couple of the more interesting
call chains are expanded and displayed above, basically some network
receive paths that presumably end up waking up wget (BusyBox) when
network data is ready.

Note that because tracepoints are normally used for tracing, the default
sampling period for tracepoints is ``1``, i.e. for tracepoints perf will
sample on every event occurrence (this can be changed using the ``-c``
option). This is in contrast to hardware counters such as, for example,
the default ``cycles`` hardware counter used for normal profiling, where
sampling periods are much higher (in the thousands) because profiling
should have as low an overhead as possible and sampling on every cycle
would be prohibitively expensive.
@@ -634,24 +644,24 @@ Using perf to do Basic Tracing
Profiling is a great tool for solving many problems or for getting a
high-level view of what's going on with a workload or across the system.
It is however by definition an approximation, as suggested by the most
prominent word associated with it, "sampling". On the one hand, it
allows a representative picture of what's going on in the system to be
cheaply taken, but on the other hand, that cheapness limits its utility
when that data suggests a need to "dive down" more deeply to discover
what's really going on. In such cases, the only way to see what's really
going on is to be able to look at (or summarize more intelligently) the
individual steps that go into the higher-level behavior exposed by the
coarse-grained profiling data.

As a concrete example, we can trace all the events we think might be
applicable to our workload::

   root@crownbay:~# perf record -g -e skb:* -e net:* -e napi:* -e sched:sched_switch -e sched:sched_wakeup -e irq:*
    -e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write
    wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2

We can look at the raw trace output using ``perf script`` with no
arguments::

   root@crownbay:~# perf script

@@ -681,7 +691,7 @@ arguments: ::
This gives us a detailed timestamped sequence of events that occurred within the
workload with respect to those events.

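Because the raw ``perf script`` lines have a fairly regular layout, they are
also easy to post-process outside of perf. The following is a hypothetical
sketch (written in Python 3 for illustration; the exact field layout is an
assumption and varies between perf versions, so treat the pattern as an
example rather than a guaranteed format):

```python
import re

# One line of "perf script" output, in the common layout:
#   <comm> <pid> [<cpu>] <timestamp>: <event>: <args>
# NOTE: this layout is an assumption for illustration; real perf
# output differs between versions and event types.
LINE_RE = re.compile(
    r"^\s*(?P<comm>\S+)\s+(?P<pid>\d+)\s+\[(?P<cpu>\d+)\]\s+"
    r"(?P<ts>\d+\.\d+):\s+(?P<event>[\w:]+):\s*(?P<args>.*)$"
)

def parse_line(line):
    """Return a dict describing one trace event, or None if unparsable."""
    m = LINE_RE.match(line)
    if not m:
        return None
    d = m.groupdict()
    d["pid"] = int(d["pid"])
    d["cpu"] = int(d["cpu"])
    d["ts"] = float(d["ts"])
    return d

# Synthetic sample line, modeled on the wget trace shown earlier.
sample = "  wget  1262 [001] 11624.859944: syscalls:sys_exit_read: 0x400"
event = parse_line(sample)
print(event["event"], event["pid"], event["ts"])
```

Turning lines into dictionaries like this is the manual analog of what the
``perf script`` language bindings, described below, give you for free.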
In many ways, profiling can be viewed as a subset of tracing ---
theoretically, if you have a set of trace events that's sufficient to
capture all the important aspects of a workload, you can derive any of
the results or views that a profiling run can.
@@ -701,23 +711,23 @@ an infinite variety of ways.
Another way to look at it is that there are only so many ways that the
"primitive" counters can be used on their own to generate interesting
output; to get anything more complicated than simple counts requires
some amount of additional logic, which is typically specific to the
problem at hand. For example, if we wanted to make use of a "counter"
that maps to the value of the time difference between when a process was
scheduled to run on a processor and the time it actually ran, we
wouldn't expect such a counter to exist on its own, but we could derive
one called, say, ``wakeup_latency`` and use it to extract a useful view of
that metric from trace data. Likewise, we really can't figure out from
standard profiling tools how much data every process on the system reads
and writes, along with how many of those reads and writes fail
completely. If we have sufficient trace data, however, we could with the
right tools easily extract and present that information, but we'd need
something other than ready-made profiling tools to do that.

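To make the ``wakeup_latency`` idea concrete, here is a minimal sketch of how
such a derived metric could be computed (Python 3; it operates on synthetic
``(timestamp, event, pid)`` tuples rather than real perf output, and pairs
each wakeup with the next switch-in of the same pid):

```python
# Hypothetical derived counter: "wakeup_latency" = time between a task's
# sched_wakeup and the sched_switch that actually runs it. The event
# stream below is synthetic; real data would come from a perf trace.
def wakeup_latencies(events):
    pending = {}       # pid -> timestamp of its most recent wakeup
    latencies = []
    for ts, name, pid in events:
        if name == "sched:sched_wakeup":
            pending[pid] = ts
        elif name == "sched:sched_switch" and pid in pending:
            latencies.append(ts - pending.pop(pid))
    return latencies

trace = [
    (100.000, "sched:sched_wakeup", 1262),
    (100.004, "sched:sched_switch", 1262),   # runs 4 ms after wakeup
    (100.050, "sched:sched_wakeup", 42),
    (100.051, "sched:sched_switch", 42),     # runs 1 ms after wakeup
]
lats = wakeup_latencies(trace)
print(["%.3f" % l for l in lats])   # prints ['0.004', '0.001']
```

The same tally could not be produced by any single pre-existing counter,
which is exactly the point being made above.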
Luckily, there is a general-purpose way to handle such needs, called
"programming languages". Making programming languages easily available
to apply to such problems given the specific format of data is called a
"programming language binding" for that data and language. perf supports
two programming language bindings, one for Python and one for Perl.

.. admonition:: Tying it Together
@@ -727,21 +737,21 @@ two programming language bindings, one for Python and one for Perl.
   DProbes dpcc compiler, an ANSI C compiler which targeted a low-level
   assembly language running on an in-kernel interpreter on the target
   system. This is exactly analogous to what Sun's DTrace did, except
   that DTrace invented its own language for the purpose. SystemTap,
   heavily inspired by DTrace, also created its own one-off language,
   but rather than running the product on an in-kernel interpreter,
   created an elaborate compiler-based machinery to translate its
   language into kernel modules written in C.

Now that we have the trace data in ``perf.data``, we can use ``perf script
-g`` to generate a skeleton script with handlers for the read / write
entry / exit events we recorded::

   root@crownbay:~# perf script -g python
   generated Python script: perf-script.py

The skeleton script just creates a Python function for each event type in the
``perf.data`` file. The body of each function just prints the event name along
with its parameters. For example:

.. code-block:: python
@@ -755,7 +765,7 @@ with its parameters. For example:
          print "skbaddr=%u, len=%u, name=%s\n" % (skbaddr, len, name),

We can run that script directly to print all of the events contained in the
``perf.data`` file::

   root@crownbay:~# perf script -s perf-script.py

@@ -784,8 +794,8 @@ perf.data file: ::
   syscalls__sys_exit_read     1 11624.859944032     1262 wget                  nr=3, ret=1024

That in itself isn't very useful; after all, we can accomplish pretty much the
same thing by just running ``perf script`` without arguments in the same
directory as the ``perf.data`` file.

We can however replace the print statements in the generated function
bodies with whatever we want, and thereby make it infinitely more
@@ -806,8 +816,8 @@ event. For example:

Each event handler function in the generated code
is modified to do this. For convenience, we define a common function
called ``inc_counts()`` that each handler calls; ``inc_counts()`` just tallies
a count for each event using the ``counts`` hash, which is a specialized
hash function that does Perl-like autovivification, a capability that's
extremely useful for the kinds of multi-level aggregation commonly used in
processing traces (see perf's documentation on the Python language
@@ -825,7 +835,7 @@ binding for details):

Finally, at the end of the trace processing run, we want to print the
result of all the per-event tallies. For that, we use the special
``trace_end()`` function:

.. code-block:: python

@@ -833,7 +843,7 @@ result of all the per-event tallies. For that, we use the special
   for event_name, count in counts.iteritems():
       print "%-40s %10s\n" % (event_name, count)

The end result is a summary of all the events recorded in the trace::

   skb__skb_copy_datagram_iovec                13148
   irq__softirq_entry                           4796
@@ -854,56 +864,57 @@ The end result is a summary of all the events recorded in the trace: ::
   syscalls__sys_exit_write                     8990

Note that this is
pretty much exactly the same information we get from ``perf stat``, which
goes a little way to support the idea mentioned previously that given
the right kind of trace data, higher-level profiling-type summaries can
be derived from it.

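The tally-and-summarize pattern that the generated script uses can be modeled
in a few lines of self-contained Python 3 (the autovivifying ``counts`` hash
behaves much like a ``defaultdict``; the event names fed in below are
synthetic stand-ins for real handler invocations):

```python
from collections import defaultdict

# Stand-in for the pattern in the generated perf script: every event
# handler bumps a per-event tally, and trace_end() prints the summary.
# defaultdict(int) gives the same "no need to pre-create the key"
# behavior as perf's autovivifying counts hash.
counts = defaultdict(int)

def inc_counts(event_name):
    counts[event_name] += 1        # key springs into existence at 0

def trace_end():
    # Print tallies, most frequent first, in the same column layout
    # as the summary shown above.
    for event_name, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        print("%-40s %10d" % (event_name, count))

# Simulate a handful of handler calls, then emit the summary.
for name in ["irq__softirq_entry", "skb__kfree_skb", "skb__kfree_skb"]:
    inc_counts(name)
trace_end()
```

In the real bindings, perf itself calls each ``eventname()`` handler and then
``trace_end()``; only the bodies are yours to fill in.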
See the documentation on using the `'perf script' Python
binding <https://linux.die.net/man/1/perf-script-python>`__ for more
information.

System-Wide Tracing and Profiling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The examples so far have focused on tracing a particular program or
workload --- that is, every profiling run has specified the program
to profile in the command-line, e.g. ``perf record wget ...``.

It's also possible, and more interesting in many cases, to run a
system-wide profile or trace while running the workload in a separate
shell.

To do system-wide profiling or tracing, you typically use the ``-a`` flag to
``perf record``.

To demonstrate this, open up one window and start the profile using the
``-a`` flag (press ``Ctrl-C`` to stop tracing)::

   root@crownbay:~# perf record -g -a
   ^C[ perf record: Woken up 6 times to write data ]
   [ perf record: Captured and wrote 1.400 MB perf.data (~61172 samples) ]

In another window, run the ``wget`` test::

   root@crownbay:~# wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA

Here we see entries not only for our ``wget`` load, but for
other processes running on the system as well:

.. image:: figures/perf-systemwide.png
   :align: center
   :width: 70%

In the snapshot above, we can see call chains that originate in ``libc``, and
a call chain from ``Xorg`` that demonstrates that we're using a proprietary X
driver in user space (notice the presence of ``PVR`` and some other
unresolvable symbols in the expanded ``Xorg`` call chain).

Note also that we have both kernel and user space entries in the above
snapshot. We can also tell perf to focus on user space by providing a
modifier, in this case ``u``, to the ``cycles`` hardware counter when we
record a profile::

   root@crownbay:~# perf record -g -a -e cycles:u
   ^C[ perf record: Woken up 2 times to write data ]
@@ -911,25 +922,27 @@ record a profile: ::

.. image:: figures/perf-report-cycles-u.png
   :align: center
   :width: 70%

Notice that in the screenshot above we see only user space entries (``[.]``).

Finally, we can press ``Enter`` on a leaf node and select the ``Zoom into
DSO`` menu item to show only entries associated with a specific DSO. In
the screenshot below, we've zoomed into the ``libc`` DSO which shows all
the entries associated with the ``libc-xxx.so`` DSO.

.. image:: figures/perf-systemwide-libc.png
   :align: center
   :width: 70%

We can also use the system-wide ``-a`` switch to do system-wide tracing.
Here we'll trace a couple of scheduler events::

   root@crownbay:~# perf record -a -e sched:sched_switch -e sched:sched_wakeup
   ^C[ perf record: Woken up 38 times to write data ]
   [ perf record: Captured and wrote 9.780 MB perf.data (~427299 samples) ]

We can look at the raw output using ``perf script`` with no arguments::

   root@crownbay:~# perf script

@@ -947,12 +960,12 @@ We can look at the raw output using 'perf script' with no arguments: ::
Filtering
^^^^^^^^^

Notice that there are many events that don't really have anything to
do with what we're interested in, namely events that schedule ``perf``
itself in and out or that wake perf up. We can get rid of those by using
the ``--filter`` option --- for each event we specify using ``-e``, we can add a
``--filter`` after that to filter out trace events that contain fields with
specific values::

   root@crownbay:~# perf record -a -e sched:sched_switch --filter 'next_comm != perf && prev_comm != perf' -e sched:sched_wakeup --filter 'comm != perf'
   ^C[ perf record: Woken up 38 times to write data ]
@@ -977,16 +990,16 @@ specific values: ::
   kworker/0:3  1209 [000]  7932.326214: sched_switch: prev_comm=kworker/0:3 prev_pid=1209 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120

In this case, we've filtered out all events that have
``perf`` in their ``comm``, ``comm_prev`` or ``comm_next`` fields. Notice that
there are still events recorded for perf, but those events don't have
values of ``perf`` for the filtered fields. Completely filtering out
anything from perf would require a bit more work, but for the purpose of
demonstrating how to use filters, it's close enough.

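The filter expressions themselves are evaluated by the kernel before events
ever reach user space. Purely to illustrate their semantics, an equivalent
user-space predicate applied to already-parsed ``sched_switch`` events might
look like this (Python 3; the event dictionaries are hypothetical stand-ins
for decoded trace records):

```python
# User-space equivalent of the kernel-side filter
#   'next_comm != perf && prev_comm != perf'
# applied to sched_switch events. Field names mirror the tracepoint
# format; the sample events are synthetic.
def keep_sched_switch(event):
    return event["next_comm"] != "perf" and event["prev_comm"] != "perf"

events = [
    {"prev_comm": "perf",        "next_comm": "swapper/0"},  # dropped
    {"prev_comm": "kworker/0:3", "next_comm": "swapper/0"},  # kept
]
print([keep_sched_switch(e) for e in events])   # prints [False, True]
```

Doing this in the kernel instead, as ``--filter`` does, means the discarded
events never pay the cost of being copied to user space at all.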
.. admonition:: Tying it Together

   These are exactly the same set of event filters defined by the trace
   event subsystem. See the ftrace / trace-cmd / KernelShark section for more
   discussion about these event filters.

.. admonition:: Tying it Together
@@ -996,14 +1009,14 @@ purpose of demonstrating how to use filters, it's close enough.
   indispensable part of the perf design as it relates to tracing.
   Kernel-based event filters provide a mechanism to precisely throttle
   the event stream that appears in user space, where it makes sense to
   provide bindings to real programming languages for post-processing the
   event stream. This architecture allows for the intelligent and
   flexible partitioning of processing between the kernel and user
   space. Contrast this with other tools such as SystemTap, which does
   all of its processing in the kernel and as such requires a special
   project-defined language in order to accommodate that design, or
   LTTng, where everything is sent to user space and as such requires a
   super-efficient kernel-to-user space transport mechanism in order to
   function properly. While perf certainly can benefit from, for instance,
   advances in the design of the transport, it doesn't fundamentally
   depend on them. Basically, if you find that your perf tracing
@@ -1014,10 +1027,10 @@ Using Dynamic Tracepoints
~~~~~~~~~~~~~~~~~~~~~~~~~

perf isn't restricted to the fixed set of static tracepoints listed by
``perf list``. Users can also add their own "dynamic" tracepoints anywhere
in the kernel. For example, suppose we want to define our own
tracepoint on ``do_fork()``. We can do that using the ``perf probe`` perf
subcommand::

   root@crownbay:~# perf probe do_fork
   Added new event:
@@ -1028,10 +1041,10 @@ subcommand: ::
   perf record -e probe:do_fork -aR sleep 1

Adding a new tracepoint via
``perf probe`` results in an event with all the expected files and format
in ``/sys/kernel/debug/tracing/events``, just the same as for static
tracepoints (as discussed in more detail in the trace events subsystem
section)::

   root@crownbay:/sys/kernel/debug/tracing/events/probe/do_fork# ls -al
   drwxr-xr-x    2 root     root             0 Oct 28 11:42 .
@@ -1045,32 +1058,32 @@ section: ::
1045 name: do_fork 1058 name: do_fork
1046 ID: 944 1059 ID: 944
1047 format: 1060 format:
1048 field:unsigned short common_type; offset:0; size:2; signed:0; 1061 field:unsigned short common_type; offset:0; size:2; signed:0;
1049 field:unsigned char common_flags; offset:2; size:1; signed:0; 1062 field:unsigned char common_flags; offset:2; size:1; signed:0;
1050 field:unsigned char common_preempt_count; offset:3; size:1; signed:0; 1063 field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
1051 field:int common_pid; offset:4; size:4; signed:1; 1064 field:int common_pid; offset:4; size:4; signed:1;
1052 field:int common_padding; offset:8; size:4; signed:1; 1065 field:int common_padding; offset:8; size:4; signed:1;
1053 1066
1054 field:unsigned long __probe_ip; offset:12; size:4; signed:0; 1067 field:unsigned long __probe_ip; offset:12; size:4; signed:0;
1055 1068
1056 print fmt: "(%lx)", REC->__probe_ip 1069 print fmt: "(%lx)", REC->__probe_ip
1057 1070
1058We can list all dynamic tracepoints currently in 1071We can list all dynamic tracepoints currently in
1059existence: :: 1072existence::
1060 1073
1061 root@crownbay:~# perf probe -l 1074 root@crownbay:~# perf probe -l
1062 probe:do_fork (on do_fork) 1075 probe:do_fork (on do_fork)
1063 probe:schedule (on schedule) 1076 probe:schedule (on schedule)
1064 1077
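When you're finished with a dynamic tracepoint, it can be removed again with
``perf probe --del``, which takes the probe names reported by ``perf probe -l``.
A minimal sketch (the error redirection and ``|| true`` just keep the commands
harmless on a system where the probes were never added):

```shell
# Remove the dynamic tracepoints added earlier; --del takes the names
# reported by 'perf probe -l'. Errors are ignored so the commands do
# nothing on a system where the probes don't exist.
perf probe --del do_fork 2>/dev/null || true
perf probe --del schedule 2>/dev/null || true
```
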
Let's record system-wide (``sleep 30`` is a
trick for recording system-wide: the ``sleep`` command itself does basically
nothing, but it gives the system-wide capture 30 seconds to run)::

   root@crownbay:~# perf record -g -a -e probe:do_fork sleep 30
   [ perf record: Woken up 1 times to write data ]
   [ perf record: Captured and wrote 0.087 MB perf.data (~3812 samples) ]

Using ``perf script`` we can see each ``do_fork`` event that fired::

   root@crownbay:~# perf script

   ...
   matchbox-deskto 1311 [001] 34237.114106: do_fork: (c1028460)
   gaku 1312 [000] 34237.202388: do_fork: (c1028460)

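Since ``perf script`` output is plain text, quick summaries can be produced
with standard tools. Here's a minimal sketch that counts ``do_fork`` events per
command name; the sample written to a hypothetical ``/tmp/do_fork.txt`` mimics
the excerpt above, and on the target you would pipe ``perf script`` straight
into ``awk`` instead:

```shell
# Count do_fork events per command name from saved 'perf script' output.
# The sample lines mimic the excerpt above; on a target you would pipe
# the output of 'perf script' into awk directly.
cat > /tmp/do_fork.txt <<'EOF'
matchbox-deskto 1311 [001] 34237.114106: do_fork: (c1028460)
gaku 1312 [000] 34237.202388: do_fork: (c1028460)
matchbox-deskto 1311 [001] 34237.302388: do_fork: (c1028460)
EOF
# $1 is the command name; count one per do_fork line.
awk '/do_fork:/ { count[$1]++ } END { for (c in count) print c, count[c] }' /tmp/do_fork.txt
```

For the sample above this prints one line per command, e.g. two forks for
``matchbox-deskto`` and one for ``gaku``.
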
And using ``perf report`` on the same file, we can see the
call graphs from starting a few programs during those 30 seconds:

.. image:: figures/perf-probe-do_fork-profile.png
   :align: center
   :width: 70%

.. admonition:: Tying it Together

   The trace events subsystem accommodates static and dynamic tracepoints
   in exactly the same way --- there's no difference as far as the
   infrastructure is concerned. See the ftrace section for more details
   on the trace event subsystem.

.. admonition:: Tying it Together

   Dynamic tracepoints are implemented under the covers by Kprobes and
   Uprobes. Kprobes and Uprobes are also used by, and are in fact the
   main focus of, SystemTap.

perf Documentation
------------------

Online versions of the manual pages for the commands discussed in this
section can be found here:

- The `'perf stat' manual page <https://linux.die.net/man/1/perf-stat>`__.

- The `'perf record'
  manual page <https://linux.die.net/man/1/perf-record>`__.

- The `'perf report'
  manual page <https://linux.die.net/man/1/perf-report>`__.

- The `'perf probe' manual page <https://linux.die.net/man/1/perf-probe>`__.

- The `'perf script'
  manual page <https://linux.die.net/man/1/perf-script>`__.

- Documentation on using the `'perf script' Python
  binding <https://linux.die.net/man/1/perf-script-python>`__.

- The top-level `perf(1) manual page <https://linux.die.net/man/1/perf>`__.

Normally, you should be able to open the manual pages via perf itself,
e.g. ``perf help`` or ``perf help record``.

To have the perf manual pages installed on your target, modify your
configuration as follows::

   IMAGE_INSTALL:append = " perf perf-doc"
   DISTRO_FEATURES:append = " api-documentation"

The manual pages in text form, along with some other files, such as a set
of examples, can also be found in the ``perf`` directory of the kernel tree::

   tools/perf/Documentation

There's also a nice perf tutorial on the perf
wiki that goes into more detail than we do here in certain areas: `perf
Tutorial <https://perf.wiki.kernel.org/index.php/Tutorial>`__

ftrace
======

"ftrace" literally refers to the "ftrace function tracer" but in reality
this encompasses several related tracers along with the
infrastructure that they all make use of.

ftrace Setup
------------

For this section, we'll assume you've already performed the basic setup
outlined in the ":ref:`profile-manual/intro:General Setup`" section.

ftrace, trace-cmd, and KernelShark run on the target system, and are
ready to go out-of-the-box --- no additional setup is necessary. For the
rest of this section we assume you're connected to the target through SSH
and will be running ftrace on the target. KernelShark is a GUI application
and if you use the ``-X`` option to ``ssh`` you can have the KernelShark GUI
run on the target but display remotely on the host if you want.

Basic ftrace usage
------------------

"ftrace" essentially refers to everything included in the ``/tracing``
directory of the mounted debugfs filesystem (Yocto follows the standard
convention and mounts it at ``/sys/kernel/debug``). All the files found in
``/sys/kernel/debug/tracing`` on a Yocto system are::

   root@sugarbay:/sys/kernel/debug/tracing# ls
   README kprobe_events trace
   ...
   free_buffer set_graph_function

The files listed above are used for various purposes
--- some relate directly to the tracers themselves, others are used to set
tracing options, and yet others actually contain the tracing output when
a tracer is in effect. Some of the functions can be guessed from their
names, others need explanation; in any case, we'll cover some of the
files we see here below, but for an explanation of the others, see
the ftrace documentation.

We'll start by looking at some of the available built-in tracers.

The ``available_tracers`` file lists the set of available tracers::

   root@sugarbay:/sys/kernel/debug/tracing# cat available_tracers
   blk function_graph function nop

The ``current_tracer`` file contains the tracer currently in effect::

   root@sugarbay:/sys/kernel/debug/tracing# cat current_tracer
   nop

The above listing of ``current_tracer`` shows that the
``nop`` tracer is in effect, which is just another way of saying that
there's actually no tracer currently in effect.

Writing one of the available tracers into ``current_tracer`` makes the
specified tracer the current tracer::

   root@sugarbay:/sys/kernel/debug/tracing# echo function > current_tracer
   root@sugarbay:/sys/kernel/debug/tracing# cat current_tracer
   function

The above sets the current tracer to be the ``function`` tracer. This tracer
traces every function call in the kernel and makes it available as the
contents of the ``trace`` file. Reading the ``trace`` file lists the
currently buffered function calls that have been traced by the function
tracer::

   root@sugarbay:/sys/kernel/debug/tracing# cat trace | less

   ...

Each line in the trace above shows what was happening in the kernel on a given
CPU, to the level of detail of function calls. Each entry shows the function
called, followed by its caller (after the arrow).

The function tracer gives you an extremely detailed idea of what the
kernel was doing at the point in time the trace was taken, and is a
great way to learn about how the kernel code works in a dynamic sense.

.. admonition:: Tying it Together

   The ftrace function tracer is also available from within perf, as the
   ``ftrace:function`` tracepoint.

It is a little more difficult to follow the call chains than it needs to
be --- luckily there's a variant of the function tracer that displays the
call chains explicitly, called the ``function_graph`` tracer::

   root@sugarbay:/sys/kernel/debug/tracing# echo function_graph > current_tracer
   root@sugarbay:/sys/kernel/debug/tracing# cat trace | less

   ...
   3) + 13.784 us   |  }
   3)               |  sys_ioctl() {

As you can see, the ``function_graph`` display is much easier
to follow. Also note that in addition to the function calls and
associated braces, other events such as scheduler events are displayed
in context. In fact, you can freely include any tracepoint available in
the trace events subsystem described in the next section by just
enabling those events, and they'll appear in context in the function
graph display. Quite a powerful tool for understanding kernel dynamics.

The 'trace events' Subsystem
----------------------------

One especially important directory contained within the
``/sys/kernel/debug/tracing`` directory is the ``events`` subdirectory, which
contains representations of every tracepoint in the system. Listing out
the contents of the ``events`` subdirectory, we see mainly another set of
subdirectories::

   root@sugarbay:/sys/kernel/debug/tracing# cd events
   root@sugarbay:/sys/kernel/debug/tracing/events# ls -al
   ...
   drwxr-xr-x 26 root root 0 Nov 14 23:19 writeback

Each one of these subdirectories
corresponds to a "subsystem" and contains yet again more subdirectories,
each one of those finally corresponding to a tracepoint. For example,
here are the contents of the ``kmem`` subsystem::

   root@sugarbay:/sys/kernel/debug/tracing/events# cd kmem
   root@sugarbay:/sys/kernel/debug/tracing/events/kmem# ls -al
   ...
   drwxr-xr-x 2 root root 0 Nov 14 23:19 mm_page_pcpu_drain

Let's see what's inside the subdirectory for a
specific tracepoint, in this case the one for ``kmalloc``::

   root@sugarbay:/sys/kernel/debug/tracing/events/kmem# cd kmalloc
   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# ls -al
   ...
   -r--r--r-- 1 root root 0 Nov 14 23:19 format
   -r--r--r-- 1 root root 0 Nov 14 23:19 id

The ``format`` file for the
tracepoint describes the event in memory, which is used by the various
tracing tools that now make use of these tracepoints to parse the event
and make sense of it, along with a ``print fmt`` field that allows tools
like ftrace to display the event as text. The format of the
``kmalloc`` event looks like::

   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# cat format
   name: kmalloc
   ID: 313
   format:
       field:unsigned short common_type; offset:0; size:2; signed:0;
       field:unsigned char common_flags; offset:2; size:1; signed:0;
       field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
       field:int common_pid; offset:4; size:4; signed:1;
       field:int common_padding; offset:8; size:4; signed:1;

       field:unsigned long call_site; offset:16; size:8; signed:0;
       field:const void * ptr; offset:24; size:8; signed:0;
       field:size_t bytes_req; offset:32; size:8; signed:0;
       field:size_t bytes_alloc; offset:40; size:8; signed:0;
       field:gfp_t gfp_flags; offset:48; size:4; signed:0;

   print fmt: "call_site=%lx ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s", REC->call_site, REC->ptr, REC->bytes_req, REC->bytes_alloc,
   (REC->gfp_flags) ? __print_flags(REC->gfp_flags, "|", {(unsigned long)(((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u) | ((
   ...
   long)(( gfp_t)0x08u), "GFP_MOVABLE"}, {(unsigned long)(( gfp_t)0), "GFP_NOTRACK"}, {(unsigned long)(( gfp_t)0x400000u), "GFP_NO_KSWAPD"},
   {(unsigned long)(( gfp_t)0x800000u), "GFP_OTHER_NODE"} ) : "GFP_NOWAIT"

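Because the ``format`` file is plain text, the field list is easy to pull out
with standard tools. The sketch below extracts each field's name and size; the
sample written to a hypothetical ``/tmp/format.txt`` reproduces a few lines of
the ``kmalloc`` format above, and on the target you would point ``sed`` at the
real ``format`` file instead:

```shell
# Extract "name size" pairs from a tracepoint format file. The sample
# reproduces a few lines of the kmalloc format shown above; on a target
# you would read events/kmem/kmalloc/format under the tracing directory.
cat > /tmp/format.txt <<'EOF'
 field:unsigned short common_type; offset:0; size:2; signed:0;
 field:const void * ptr; offset:24; size:8; signed:0;
 field:size_t bytes_req; offset:32; size:8; signed:0;
EOF
# Capture the last identifier before the first ';' (the field name) and
# the value of the size: attribute, printing one "name size" pair per line.
sed -n 's/.*field:[^;]*[ *]\([A-Za-z0-9_]*\);.*size:\([0-9]*\);.*/\1 \2/p' /tmp/format.txt
```

For the sample above this yields ``common_type 2``, ``ptr 8`` and
``bytes_req 8``.
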
The ``enable`` file
in the tracepoint directory is what allows the user (or tools such as
``trace-cmd``) to actually turn the tracepoint on and off. When enabled, the
corresponding tracepoint will start appearing in the ftrace ``trace`` file
described previously. For example, this turns on the ``kmalloc`` tracepoint::

   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 1 > enable

At the moment, we're not interested in the function tracer or
some other tracer that might be in effect, so we first turn it off, but
if we do that, we still need to turn tracing on in order to see the
events in the output buffer::

   root@sugarbay:/sys/kernel/debug/tracing# echo nop > current_tracer
   root@sugarbay:/sys/kernel/debug/tracing# echo 1 > tracing_on

Now, if we look at the ``trace`` file, we see nothing
but the ``kmalloc`` events we just turned on::

   root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
   # tracer: nop
   ...
   <idle>-0 [000] ..s3 18156.400660: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
   matchbox-termin-1361 [001] ...1 18156.552800: kmalloc: call_site=ffffffff81614050 ptr=ffff88006db34800 bytes_req=576 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT

To again disable the ``kmalloc`` event, we need to send ``0`` to the ``enable`` file::

   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 0 > enable

You can enable any number of events or complete subsystems (by
using the ``enable`` file in the subsystem directory) and get an
arbitrarily fine-grained idea of what's going on in the system by
enabling as many of the appropriate tracepoints as applicable.

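Putting the pieces together, the whole cycle (disable any current tracer,
enable a tracepoint, capture for a while, then turn everything back off) can
be scripted. This is only a sketch: it assumes the standard Yocto debugfs
mount point and does nothing on a machine where the tracing directory isn't
writable, e.g. on the build host:

```shell
# Sketch of a complete capture cycle using the kmalloc tracepoint.
# Assumes the standard debugfs mount used by Yocto; prints a message
# and does nothing if tracing isn't available (e.g. on the build host).
T=/sys/kernel/debug/tracing
if [ -w "$T/tracing_on" ]; then
    echo nop > "$T/current_tracer"            # no tracer interfering
    echo 1 > "$T/events/kmem/kmalloc/enable"  # turn the tracepoint on
    echo 1 > "$T/tracing_on"                  # start capturing
    sleep 1                                   # let some events accumulate
    echo 0 > "$T/tracing_on"                  # stop capturing
    echo 0 > "$T/events/kmem/kmalloc/enable"  # tracepoint off again
    head -20 "$T/trace"                       # show the first events
else
    echo "tracing not available; run this on the target"
fi
```
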
Several tools described in this How-to do just that, including
``trace-cmd`` and KernelShark in the next section.

.. admonition:: Tying it Together

   These tracepoints and their representation are used not only by
   ftrace, but by many of the other tools covered in this document and
   they form a central point of integration for the various tracers
   available in Linux. They form a central part of the instrumentation
   for the following tools: perf, LTTng, ftrace, blktrace and SystemTap.

.. admonition:: Tying it Together

   Eventually all the special-purpose tracers currently available in
   ``/sys/kernel/debug/tracing`` will be removed and replaced with
   equivalent tracers based on the "trace events" subsystem.

trace-cmd / KernelShark
-----------------------

trace-cmd is essentially an extensive command-line "wrapper" interface
that hides the details of all the individual files in
``/sys/kernel/debug/tracing``, allowing users to specify particular
events within the ``/sys/kernel/debug/tracing/events/`` subdirectory and to
collect traces and avoid having to deal with those details directly.

As yet another layer on top of that, KernelShark provides a GUI that
allows users to start and stop traces and specify sets of events using
an intuitive interface, and to view the output as both trace events and as
a per-CPU graphical display. It directly uses trace-cmd as the
plumbing that accomplishes all that underneath the covers (and actually
displays the trace-cmd command it uses, as we'll see).

To start a trace using KernelShark, first start this tool::

   root@sugarbay:~# kernelshark

Then open up the ``Capture`` dialog by choosing from the KernelShark menu::

   Capture | Record

That will display the following dialog, which allows you to choose one or more
events (or even entire subsystems) to trace:

.. image:: figures/kernelshark-choose-events.png
   :align: center
   :width: 70%

Note that these are exactly the same sets of events described in the
previous trace events subsystem section, and in fact that is where trace-cmd
gets them for KernelShark.

In the above screenshot, we've decided to explore the graphics subsystem
a bit and so have chosen to trace all the tracepoints contained within
the ``i915`` and ``drm`` subsystems.

After doing that, we can start and stop the trace using the ``Run`` and
``Stop`` button on the lower right corner of the dialog (the same button
will turn into the ``Stop`` button after the trace has started):

.. image:: figures/kernelshark-output-display.png
   :align: center
   :width: 70%

Notice that the right pane shows the exact trace-cmd command-line
that's used to run the trace, along with the results of the trace-cmd
run.

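For reference, a roughly equivalent session run by hand from the command line
might look like the sketch below. The event selection here is hypothetical;
the authoritative command for any given capture is the one KernelShark itself
displays in its right pane:

```shell
# Record the i915 and drm subsystems for ten seconds, then decode the
# captured events from the resulting trace.dat file. Guarded so it's a
# no-op on a machine without trace-cmd or without writable tracefs.
if command -v trace-cmd > /dev/null 2>&1 && [ -w /sys/kernel/debug/tracing/tracing_on ]; then
    trace-cmd record -e i915 -e drm sleep 10   # writes trace.dat
    trace-cmd report | head                    # print decoded events
else
    echo "trace-cmd not usable here; run this on the target"
fi
```
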
1706Once the 'Stop' button is pressed, the graphical view magically fills up 1722Once the ``Stop`` button is pressed, the graphical view magically fills up
1707with a colorful per-cpu display of the trace data, along with the 1723with a colorful per-CPU display of the trace data, along with the
1708detailed event listing below that: 1724detailed event listing below that:
1709 1725
1710.. image:: figures/kernelshark-i915-display.png 1726.. image:: figures/kernelshark-i915-display.png
1711 :align: center 1727 :align: center
1728 :width: 70%
1712 1729
1713Here's another example, this time a display resulting from tracing 'all 1730Here's another example, this time a display resulting from tracing ``all
1714events': 1731events``:
1715 1732
1716.. image:: figures/kernelshark-all.png 1733.. image:: figures/kernelshark-all.png
1717 :align: center 1734 :align: center
1735 :width: 70%
1718 1736
1719The tool is pretty self-explanatory, but for more detailed information 1737The tool is pretty self-explanatory, but for more detailed information
1720on navigating through the data, see the `kernelshark 1738on navigating through the data, see the `KernelShark
1721website <https://rostedt.homelinux.com/kernelshark/>`__. 1739website <https://kernelshark.org/Documentation.html>`__.

ftrace Documentation
--------------------

The documentation for ftrace can be found in the kernel Documentation
directory::

   Documentation/trace/ftrace.txt

The documentation for the trace event subsystem can also be found in the kernel
Documentation directory::

   Documentation/trace/events.txt

A nice series of articles on using ftrace and trace-cmd is available at LWN:

- `Debugging the kernel using ftrace - part
  1 <https://lwn.net/Articles/365835/>`__

- `Debugging the kernel using ftrace - part
  2 <https://lwn.net/Articles/366796/>`__

- `Secrets of the ftrace function
  tracer <https://lwn.net/Articles/370423/>`__

- `trace-cmd: A front-end for
  ftrace <https://lwn.net/Articles/410200/>`__

See also `KernelShark's documentation <https://kernelshark.org/Documentation.html>`__
for further usage details.

An amusing yet useful README (a tracing mini-How-to) can be found in
``/sys/kernel/debug/tracing/README``.

SystemTap
=========

SystemTap is a system-wide script-based tracing and profiling tool.

SystemTap scripts are C-like programs that are executed in the kernel to
gather, print, and aggregate data extracted from the context they end up
being called under.

For example, this probe from the `SystemTap
tutorial <https://sourceware.org/systemtap/tutorial/>`__ just prints a
line every time any process on the system runs ``open()`` on a file. For each
line, it prints the executable name of the program that opened the file, along
with its PID, and the name of the file it opened (or tried to open), which it
extracts from the argument string (``argstr``) of the ``open`` system call:

.. code-block:: none

   probe syscall.open
   {
           printf ("%s(%d) open (%s)\n", execname(), pid(), argstr)
   }

Normally, to execute this
probe, you'd just install SystemTap on the system you want to probe,
and directly run the probe on that system, e.g. assuming the name of the
file containing the above text is ``trace_open.stp``::

   # stap trace_open.stp

What SystemTap does under the covers to run this probe is 1) parse and
convert the probe to an equivalent "C" form, 2) compile the "C" form
into a kernel module, 3) insert the module into the kernel, which arms
it, and 4) collect the data generated by the probe and display it to the
user.
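
On a cross-development setup, those same four steps can be done by hand.
Here is a rough sketch; the paths, the target architecture, and the
cross-compiler prefix are hypothetical examples, not output from a real
build::

   # Steps 1 and 2: on the host, build the module against the target's
   # kernel build tree
   $ stap -a arm -B CROSS_COMPILE=arm-poky-linux-gnueabi- \
          -r /path/to/target/kernel/build -m trace_open -p4 trace_open.stp

   # Steps 3 and 4: copy the module to the target, then load and run it
   # there using only the SystemTap runtime
   $ scp trace_open.ko root@192.168.7.2:
   $ ssh root@192.168.7.2 staprun trace_open.ko

This is exactly the busywork that the ``crosstap`` script described in the
next section automates for you.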

In order to accomplish steps 1 and 2, the ``stap`` program needs access to
the kernel build system that produced the kernel that the probed system
is running. In the case of a typical embedded system (the "target"), the
kernel build system unfortunately isn't typically part of the image
running on the target. It is normally available on the "host" system
that produced the target image, however; in such cases, steps 1 and 2 are
executed on the host system, and steps 3 and 4 are executed on the
target system, using only the SystemTap "runtime".

The SystemTap support in Yocto assumes that only steps 3 and 4 are run
on the target; it is possible to do everything on the target, but this
section assumes only the typical embedded use case.

Therefore, what you need to do in order to run a SystemTap script on
the target is to 1) on the host system, compile the probe into a kernel
module that makes sense to the target, 2) copy the module onto the
target system, 3) insert the module into the target kernel, which
arms it, and 4) collect the data generated by the probe and display it
to the user.

SystemTap Setup
---------------

Those are many steps and details, but fortunately Yocto
includes a script called ``crosstap`` that will take care of those
details, allowing you to just execute a SystemTap script on the remote
target, with arguments if necessary.

In order to do this from a remote host, however, you need to have access
to the build for the image you booted. The ``crosstap`` script provides
details on how to do this if you run the script on the host without
having done a build::

   $ crosstap root@192.168.1.88 trace_open.stp

   Error: No target kernel build found.
   Did you forget to create a local build of your image?

``crosstap`` requires a local SDK build of the target system
(or a build that includes ``tools-profile``) in order to build
kernel modules that can probe the target system.

Practically speaking, that means you need to do the following:

- If you're running a pre-built image, download the release
  and/or BSP tarballs used to build the image.

- If you're working from git sources, just clone the metadata
  and BSP layers needed to build the image you'll be booting.

- Make sure you're properly set up to build a new image (see
  the BSP README and/or the widely available basic documentation
  that discusses how to build images).

- Build an ``-sdk`` version of the image, e.g.::

     $ bitbake core-image-sato-sdk

- Or build a non-SDK image but include the profiling tools
  (edit ``local.conf`` and add ``tools-profile`` to the end of
  the :term:`EXTRA_IMAGE_FEATURES` variable)::

     $ bitbake core-image-sato

Once you've built the image on the host system, you're ready to
boot it (or the equivalent pre-built image) and use ``crosstap``
to probe it (you need to source the environment as usual first)::

   $ source oe-init-build-env
   $ cd ~/my/systemtap/scripts
   $ crosstap root@192.168.7.2 trace_open.stp

.. note::

   SystemTap, which uses ``crosstap``, assumes you can establish an SSH
   connection to the remote target. Please refer to the crosstap wiki
   page for details on verifying SSH connections. Also, the ability to SSH
   into the target system is not enabled by default in ``*-minimal`` images.

Therefore, what you need to do is build an SDK image or image with
``tools-profile`` as detailed in the ":ref:`profile-manual/intro:General Setup`"
section of this manual, and boot the resulting target image.

.. note::

   If you have a :term:`Build Directory` containing multiple machines, you need
   to have the :term:`MACHINE` you're connecting to selected in ``local.conf``, and
   the kernel in that machine's :term:`Build Directory` must match the kernel on
   the booted system exactly, or you'll get the above ``crosstap`` message
   when you try to call a script.

Running a Script on a Target
----------------------------

Once you've done that, you should be able to run a SystemTap script on
the target::

   $ cd /path/to/yocto
   $ source oe-init-build-env

   ### Shell environment set up for builds. ###

   You can now run 'bitbake <target>'

   Common targets are:
       core-image-minimal
       core-image-sato
       meta-toolchain
       meta-ide-support

   You can also run generated QEMU images with a command like 'runqemu qemux86-64'

Once you've done that, you can ``cd`` to whatever
directory contains your scripts and use ``crosstap`` to run the script::

   $ cd /path/to/my/systemtap/scripts
   $ crosstap root@192.168.7.2 trace_open.stp
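
If your script takes arguments, you can pass them after the script name and
refer to them inside the script using SystemTap's ``@1``/``$1`` argument
syntax. For example, a hypothetical ``watch_proc.stp`` script that filters on
an executable name passed as its first argument could be run like this::

   $ crosstap root@192.168.7.2 watch_proc.stp matchbox-terminal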

If you get an error connecting to the target, e.g.::

   $ crosstap root@192.168.7.2 trace_open.stp
   error establishing ssh connection on remote 'root@192.168.7.2'

Try connecting to the target through SSH and see what happens::

   $ ssh root@192.168.7.2

Connection problems are often due to specifying a wrong IP address or to a
"host key verification error".

If everything worked as planned, you should see something like this
(enter the password when prompted, or press enter if it's set up to use
no password)::

   matchbox-termin(1036) open ("/tmp/vte3FS2LW", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0600)
   matchbox-termin(1036) open ("/tmp/vteJMC7LW", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0600)

SystemTap Documentation
-----------------------

The SystemTap language reference can be found here: `SystemTap Language
Reference <https://sourceware.org/systemtap/langref/>`__

Links to other SystemTap documents, tutorials, and examples can be found
here: `SystemTap documentation
page <https://sourceware.org/systemtap/documentation.html>`__

Sysprof
=======

Sysprof is an easy-to-use system-wide profiler that consists of a
single window with three panes and a few buttons which allow you to
start, stop, and view the profile from one place.

Sysprof Setup
-------------

For this section, we'll assume you've already performed the basic setup
outlined in the ":ref:`profile-manual/intro:General Setup`" section.

Sysprof is a GUI-based application that runs on the target system. For the rest
of this document we assume you're connected to the target through SSH and will
be running Sysprof on the target (you can use the ``-X`` option to ``ssh`` and
have the Sysprof GUI run on the target but display remotely on the host
if you want).

Basic Sysprof Usage
-------------------

To start profiling the system, you just press the ``Start`` button. To
stop profiling and to start viewing the profile data in one easy step,
press the ``Profile`` button.

Once you've pressed the ``Profile`` button, the three panes will fill up
with profiling data:

.. image:: figures/sysprof-copy-to-user.png
   :align: center
   :width: 70%

The left pane shows a list of functions and processes. Selecting one of
those expands that function in the right pane, showing all its callees.
Note that this caller-oriented display is essentially the inverse of
perf's default callee-oriented call chain display.

In the screenshot above, we're focusing on ``__copy_to_user_ll()`` and,
looking up the call chain, we can see that one of the callers of
``__copy_to_user_ll()`` is ``sys_read()``, along with the complete call path
between them. Notice that this is essentially a portion of the same
information we saw in the perf display shown in the perf section of this page.

.. image:: figures/sysprof-copy-from-user.png
   :align: center
   :width: 70%

Similarly, the above is a snapshot of the Sysprof display of a
copy-from-user call chain.

Finally, looking at the third Sysprof pane in the lower left, we can see
a list of all the callers of a particular function selected in the top
left pane. In this case, the lower pane is showing all the callers of
``__mark_inode_dirty``:

.. image:: figures/sysprof-callers.png
   :align: center
   :width: 70%

Double-clicking on one of those functions will in turn change the focus
to the selected function, and so on.

.. admonition:: Tying it Together

   If you like Sysprof's caller-oriented display, you may be able to
   approximate it in other tools as well. For example, ``perf report`` has
   the ``-g`` (``--call-graph``) option that you can experiment with; one of
   the options is ``caller`` for an inverted caller-based call graph display.
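
As a sketch of that inverted display (the exact ``-g`` syntax varies slightly
between perf versions, so treat these flags as illustrative)::

   $ perf record -g -- sleep 10
   $ perf report -g graph,0.5,caller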

Sysprof Documentation
---------------------

There doesn't seem to be any documentation for Sysprof, but maybe that's
because it's pretty self-explanatory. The Sysprof website, however, is here:
`Sysprof, System-wide Performance Profiler for
Linux <http://sysprof.com/>`__

LTTng (Linux Trace Toolkit, next generation)
============================================

LTTng Setup
-----------

For this section, we'll assume you've already performed the basic setup
outlined in the ":ref:`profile-manual/intro:General Setup`" section.
LTTng is run on the target system by connecting to it through SSH.

Collecting and Viewing Traces
-----------------------------

Once you've built and booted your image (you need to build the
``core-image-sato-sdk`` image or use one of the other methods described in the
":ref:`profile-manual/intro:General Setup`" section), you're ready to start
tracing.

Collecting and viewing a trace on the target (inside a shell)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First, from the host, connect to the target through SSH::

   $ ssh -l root 192.168.1.47
   The authenticity of host '192.168.1.47 (192.168.1.47)' can't be established.
   Warning: Permanently added '192.168.1.47' (RSA) to the list of known hosts.
   root@192.168.1.47's password:

Once on the target, use these steps to create a trace::

   root@crownbay:~# lttng create
   Spawning a session daemon
   Session auto-20121015-232120 created.
   Traces will be written in /home/root/lttng-traces/auto-20121015-232120

Enable the events you want to trace (in this case all kernel events)::

   root@crownbay:~# lttng enable-event --kernel --all
   All kernel events are enabled in channel channel0

Start the trace::

   root@crownbay:~# lttng start
   Tracing started for session auto-20121015-232120

And then stop the trace after a while, or after running a particular workload
that you want to trace::

   root@crownbay:~# lttng stop
   Tracing stopped for session auto-20121015-232120

You can now view the trace in text form on the target::

   root@crownbay:~# lttng view
   [23:21:56.989270399] (+?.?????????) sys_geteuid: { 1 }, { }
   .
   .
   .
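
Because ``lttng view`` emits plain text, you can post-process a saved dump
with ordinary shell tools. Here is a small self-contained sketch; the
``sys_open`` lines are made-up sample data in the same format, not real trace
output:

```shell
# Write a small sample "lttng view"-style dump. Only the sys_geteuid line
# is taken from the real trace above; the sys_open lines are invented.
printf '%s\n' \
  '[23:21:56.989270399] (+?.?????????) sys_geteuid: { 1 }, { }' \
  '[23:21:56.989278682] (+0.000008283) sys_open: { 1 }, { filename = "/etc/passwd" }' \
  '[23:21:56.989280100] (+0.000001418) sys_open: { 1 }, { filename = "/etc/group" }' \
  > trace.txt

# Tally events by name: field 3 is the event name with a trailing colon.
awk '{ sub(":", "", $3); count[$3]++ } END { for (e in count) print count[e], e }' trace.txt | sort -rn
# prints "2 sys_open" then "1 sys_geteuid"
```

The same pipeline works on a full dump captured on the target with
``lttng view > trace.txt``.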

You can now safely destroy the trace
session (note that this doesn't delete the trace --- it's still there in
``~/lttng-traces``)::

   root@crownbay:~# lttng destroy
   Session auto-20121015-232120 destroyed at /home/root

Note that the trace is saved in a directory of the same name as returned by
``lttng create``, under the ``~/lttng-traces`` directory (note that you can
change this by supplying your own name to ``lttng create``)::

   root@crownbay:~# ls -al ~/lttng-traces
   drwxrwx---    3 root     root          1024 Oct 15 23:21 .
   drwxr-xr-x    5 root     root          1024 Oct 15 23:57 ..
   drwxrwx---    3 root     root          1024 Oct 15 23:21 auto-20121015-232120
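
By default the session name and output directory are auto-generated, but both
can be set explicitly with ``lttng create``. For example (the session name
and path here are illustrative)::

   root@crownbay:~# lttng create my-session --output=/home/root/my-traces
   Session my-session created.
   Traces will be written in /home/root/my-traces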

Collecting and viewing a user space trace on the target (inside a shell)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For LTTng user space tracing, you need to have a properly instrumented
user space program. For this example, we'll use the ``hello`` test program
generated by the ``lttng-ust`` build.

The ``hello`` test program isn't installed on the root filesystem by the
``lttng-ust`` build, so we need to copy it over manually. First ``cd`` into
the build directory that contains the ``hello`` executable::

   $ cd build/tmp/work/core2_32-poky-linux/lttng-ust/2.0.5-r0/git/tests/hello/.libs

Copy that over to the target machine::

   $ scp hello root@192.168.1.20:

You now have the instrumented LTTng "hello world" test program on the
target, ready to test.

First, from the host, connect to the target through SSH::

   $ ssh -l root 192.168.1.47
   The authenticity of host '192.168.1.47 (192.168.1.47)' can't be established.
   Warning: Permanently added '192.168.1.47' (RSA) to the list of known hosts.
   root@192.168.1.47's password:

Once on the target, use these steps to create a trace::

   root@crownbay:~# lttng create
   Session auto-20190303-021943 created.
   Traces will be written in /home/root/lttng-traces/auto-20190303-021943

Enable the events you want to trace (in this case all user space events)::

   root@crownbay:~# lttng enable-event --userspace --all
   All UST events are enabled in channel channel0

Start the trace::

   root@crownbay:~# lttng start
   Tracing started for session auto-20190303-021943

Run the instrumented "hello world" program::

   root@crownbay:~# ./hello
   Hello, World!
   Tracing... done.
2183 2206
2184And then stop the trace after awhile or after running a particular workload 2207And then stop the trace after awhile or after running a particular workload
2185that you want to trace: :: 2208that you want to trace::
2186 2209
2187 root@crownbay:~# lttng stop 2210 root@crownbay:~# lttng stop
2188 Tracing stopped for session auto-20190303-021943 2211 Tracing stopped for session auto-20190303-021943
2189 2212
2190You can now view the trace in text form on the target: :: 2213You can now view the trace in text form on the target::
2191 2214
2192 root@crownbay:~# lttng view 2215 root@crownbay:~# lttng view
2193 [02:31:14.906146544] (+?.?????????) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 0, intfield2 = 0x0, longfield = 0, netintfield = 0, netintfieldhex = 0x0, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4, seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 } 2216 [02:31:14.906146544] (+?.?????????) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 0, intfield2 = 0x0, longfield = 0, netintfield = 0, netintfieldhex = 0x0, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4, seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 }
   .

You can now safely destroy the trace session (note that this doesn't delete the
trace --- it's still there in ``~/lttng-traces``)::

   root@crownbay:~# lttng destroy
   Session auto-20190303-021943 destroyed at /home/root
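
The create / enable / start / stop / destroy sequence above lends itself to a
small wrapper script. The sketch below is hypothetical (``trace_workload`` is
not part of LTTng); it assumes the ``lttng`` CLI is on the target's PATH, and
its ``DRY_RUN`` mode just prints the commands so the flow can be checked on a
machine without LTTng installed:

```shell
#!/bin/sh
# Hypothetical helper wrapping the lttng session life cycle shown above.
# DRY_RUN=1 prints each lttng command instead of executing it, so the
# sequencing can be checked off-target.

run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "lttng $*"
    else
        lttng "$@"
    fi
}

trace_workload() {
    run create                           # session under ~/lttng-traces
    run enable-event --userspace --all   # all UST events, channel0
    run start
    if [ "${DRY_RUN:-0}" != "1" ]; then
        "$@"                             # the workload, e.g. ./hello
    fi
    run stop
    run view                             # text dump of the trace
    run destroy                          # session gone; trace files remain
}

DRY_RUN=1
trace_workload ./hello
```

With ``DRY_RUN`` unset on a real target, the same function runs the session
for whatever workload command you pass it.
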
the entire blktrace and blkparse pipeline on the target, or you can run
blktrace in 'listen' mode on the target and have blktrace and blkparse
collect and analyze the data on the host (see the
":ref:`profile-manual/usage:Using blktrace Remotely`" section
below). For the rest of this section we assume you've connected to the host
through SSH and will be running blktrace on the target.

Basic blktrace Usage
--------------------

To record a trace, just run the ``blktrace`` command, giving it the name
of the block device you want to trace activity on::

   root@crownbay:~# blktrace /dev/sdc

In another shell, execute a workload you want to trace::

   root@crownbay:/media/sdc# rm linux-2.6.19.2.tar.bz2; wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2; sync
   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA

Press ``Ctrl-C`` in the blktrace shell to stop the trace. It
will display how many events were logged, along with the per-cpu file
sizes (blktrace records traces in per-cpu kernel buffers and just
dumps them to user space for blkparse to merge and sort later)::

   ^C=== sdc ===
   CPU  0:                 7082 events,      332 KiB data
   Total:                  8660 events (dropped 0),      406 KiB data

If you examine the files saved to disk, you see multiple files, one per CPU and
with the device name as the first part of the filename::

   root@crownbay:~# ls -al
   drwxr-xr-x    6 root     root          1024 Oct 27 22:39 .
   -rw-r--r--    1 root     root        339938 Oct 27 22:40 sdc.blktrace.0
   -rw-r--r--    1 root     root         75753 Oct 27 22:40 sdc.blktrace.1

To view the trace events, just call ``blkparse`` in the directory
containing the trace files, giving it the device name that forms the
first part of the filenames::

   root@crownbay:~# blkparse sdc
   8,32   1        0    58.516990819     0  m   N cfq3551 put_queue

   CPU0 (sdc):
    Reads Queued:           0,        0KiB  Writes Queued:         331,   26,284KiB
    Read Dispatches:        0,        0KiB  Write Dispatches:      485,   40,484KiB
    Reads Requeued:         0               Writes Requeued:         0
    Reads Completed:        0,        0KiB  Writes Completed:      511,   41,000KiB
    Read Merges:            0,        0KiB  Write Merges:           13,      160KiB
    Read depth:             0               Write depth:             2
    IO unplugs:            23               Timer unplugs:           0
   CPU1 (sdc):
    Reads Queued:           0,        0KiB  Writes Queued:         249,   15,800KiB
    Read Dispatches:        0,        0KiB  Write Dispatches:       42,    1,600KiB
    Reads Requeued:         0               Writes Requeued:         0
    Reads Completed:        0,        0KiB  Writes Completed:       16,    1,084KiB
    Read Merges:            0,        0KiB  Write Merges:           40,      276KiB
    Read depth:             0               Write depth:             2
    IO unplugs:            30               Timer unplugs:           1

   Total (sdc):
    Reads Queued:           0,        0KiB  Writes Queued:         580,   42,084KiB
    Read Dispatches:        0,        0KiB  Write Dispatches:      527,   42,084KiB
    Reads Requeued:         0               Writes Requeued:         0
    Reads Completed:        0,        0KiB  Writes Completed:      527,   42,084KiB
    Read Merges:            0,        0KiB  Write Merges:           53,      436KiB
    IO unplugs:            53               Timer unplugs:           1

   Throughput (R/W): 0KiB/s / 719KiB/s
   Events (sdc): 6,592 entries
The report shows each event that was
found in the blktrace data, along with a summary of the overall block
I/O traffic during the run. You can look at the
`blkparse <https://linux.die.net/man/1/blkparse>`__ manual page to learn the
meaning of each field displayed in the trace listing.
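
If you save the blkparse output to a file, the summary section is also easy to
post-process. The snippet below is just a sketch: it runs awk over a
hard-coded excerpt of the ``Total (sdc)`` section from the listing above, but
in practice you would feed it saved ``blkparse sdc`` output instead. The field
numbers (``$7``, ``$5``) correspond to the summary layout shown above:

```shell
# Sketch: extract totals from a saved blkparse summary with awk.
# The sample text is an excerpt of the listing above.

summary=' Reads Queued:           0,        0KiB  Writes Queued:         331,   26,284KiB
Total (sdc):
 Reads Queued:           0,        0KiB  Writes Queued:         580,   42,084KiB

Throughput (R/W): 0KiB/s / 719KiB/s'

printf '%s\n' "$summary" | awk '
    inTotal && /Writes Queued/ { gsub(",", "", $7); print "writes queued:", $7; inTotal = 0 }
    /^Total/      { inTotal = 1 }
    /^Throughput/ { print "write throughput:", $5 }
'
```

For the sample above this prints ``writes queued: 580`` followed by
``write throughput: 719KiB/s``.
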

Live Mode
~~~~~~~~~

blktrace and blkparse are designed from the ground up to be able to
operate together in a "pipe mode" where the standard output of blktrace can be
fed directly into the standard input of blkparse::

   root@crownbay:~# blktrace /dev/sdc -o - | blkparse -i -

There's actually another blktrace command that implements the above
pipeline as a single command, so the user doesn't have to bother typing
in the above command sequence::

   root@crownbay:~# btrace /dev/sdc
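
For unattended captures, blktrace's ``-w`` option stops the trace after a
fixed number of seconds, which combines naturally with the pipeline above.
The helper below is hypothetical (``live_trace_cmd`` is not a blktrace tool)
and only prints the command line it would run, so it can be sanity-checked
without root privileges or a real block device:

```shell
# Hypothetical helper: build the live-mode pipeline shown above, with
# blktrace's -w option added to stop capture after a fixed duration.
# Echoed rather than executed, so no root or block device is needed here.

live_trace_cmd() {
    dev="$1"
    secs="$2"
    echo "blktrace -w $secs $dev -o - | blkparse -i -"
}

live_trace_cmd /dev/sdc 10
# prints: blktrace -w 10 /dev/sdc -o - | blkparse -i -
```

On a real target you would run the printed pipeline directly (as root), or
wrap it in ``eval`` inside the helper.
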

tracer writes to, blktrace provides a way to trace without perturbing
the traced device at all by providing native support for sending all
trace data over the network.

To have blktrace operate in this mode, start blktrace in server mode on the
host system, which is going to store the captured data::

   $ blktrace -l
   server: waiting for connections...

On the target system that is going to be traced, start blktrace in client
mode with the ``-h`` option to connect to the host system, also passing it
the device to trace::

   root@crownbay:~# blktrace -d /dev/sdc -h 192.168.1.43
   blktrace: connecting to 192.168.1.43
   blktrace: connected!

On the host system, you should see this::

   server: connection from 192.168.1.43

In another shell, execute a workload you want to trace::

   root@crownbay:/media/sdc# rm linux-2.6.19.2.tar.bz2; wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2; sync
   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
   linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA

When it's done, do a ``Ctrl-C`` on the target system to stop the
trace::

   ^C=== sdc ===
   CPU  0:                 7691 events,      361 KiB data
   CPU  1:                 4109 events,      193 KiB data
   Total:                 11800 events (dropped 0),      554 KiB data

On the host system, you should also see a trace summary for the trace
just ended::

   server: end of run for 192.168.1.43:sdc
   === sdc ===
   Total:                 11800 events (dropped 0),      554 KiB data

The blktrace instance on the host will
save the target output inside a ``<hostname>-<timestamp>`` directory::

   $ ls -al
   drwxr-xr-x   10 root     root          1024 Oct 28 02:40 .
   drwxr-sr-x    4 root     root          1024 Oct 26 18:24 ..
   drwxr-xr-x    2 root     root          1024 Oct 28 02:40 192.168.1.43-2012-10-28-02:40:56

``cd`` into that directory to see the output files::

   $ ls -l
   -rw-r--r--   1 root root   369193 Oct 28 02:44 sdc.blktrace.0
   -rw-r--r--   1 root root   197278 Oct 28 02:44 sdc.blktrace.1

And run blkparse on the host system using the device name::

   $ blkparse sdc
   8,32   1        0   177.266696560     0  m   N cfq1267 put_queue

   CPU0 (sdc):
    Reads Queued:           0,        0KiB  Writes Queued:         270,   21,708KiB
    Read Dispatches:       59,    2,628KiB  Write Dispatches:      495,   39,964KiB
    Reads Requeued:         0               Writes Requeued:         0
    Reads Completed:       90,    2,752KiB  Writes Completed:      543,   41,596KiB
    Read Merges:            0,        0KiB  Write Merges:            9,      344KiB
    Read depth:             2               Write depth:             2
    IO unplugs:            20               Timer unplugs:           1
   CPU1 (sdc):
    Reads Queued:         688,    2,752KiB  Writes Queued:         381,   20,652KiB
    Read Dispatches:       31,      124KiB  Write Dispatches:       59,    2,396KiB
    Reads Requeued:         0               Writes Requeued:         0
    Reads Completed:        0,        0KiB  Writes Completed:       11,      764KiB
    Read Merges:          598,    2,392KiB  Write Merges:           88,      448KiB
    Read depth:             2               Write depth:             2
    IO unplugs:            52               Timer unplugs:           0

   Total (sdc):
    Reads Queued:         688,    2,752KiB  Writes Queued:         651,   42,360KiB
    Read Dispatches:       90,    2,752KiB  Write Dispatches:      554,   42,360KiB
    Reads Requeued:         0               Writes Requeued:         0
    Reads Completed:       90,    2,752KiB  Writes Completed:      554,   42,360KiB
    Read Merges:          598,    2,392KiB  Write Merges:           97,      792KiB
    IO unplugs:            72               Timer unplugs:           1

   Throughput (R/W): 15KiB/s / 238KiB/s
   Events (sdc): 9,301 entries
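
Because the server names each capture directory after the client and a
timestamp, the most recent capture is easy to locate by name. The helper below
is a sketch (``newest_capture_dir`` is not a blktrace tool); it is exercised
here against scratch directories instead of real capture output:

```shell
# Sketch: find the newest <hostname>-<timestamp> capture directory the
# blktrace server created. Directory names sort lexically, and the
# timestamp suffix means the most recent capture sorts last.

newest_capture_dir() {
    ls -d "$1"/*-*/ 2>/dev/null | tail -n 1
}

# Exercise against a scratch layout instead of real capture output:
base=$(mktemp -d)
mkdir -p "$base/192.168.1.43-2012-10-28-02:40:56" \
         "$base/192.168.1.43-2012-10-29-10:00:00"
newest_capture_dir "$base"
# On a real host: cd "$(newest_capture_dir .)" && blkparse sdc
```
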
Tracing Block I/O via 'ftrace'
------------------------------

It's also possible to trace block I/O using only
:ref:`profile-manual/usage:The 'trace events' Subsystem`, which
can be useful for casual tracing if you don't want to bother dealing with the
user space tools.

To enable tracing for a given device, use ``/sys/block/xxx/trace/enable``,
where ``xxx`` is the device name. For example, this enables tracing for
``/dev/sdc``::

   root@crownbay:/sys/kernel/debug/tracing# echo 1 > /sys/block/sdc/trace/enable

Once you've selected the device(s) you want
to trace, selecting the ``blk`` tracer will turn the blk tracer on::

   root@crownbay:/sys/kernel/debug/tracing# cat available_tracers
   blk function_graph function nop

   root@crownbay:/sys/kernel/debug/tracing# echo blk > current_tracer

Execute the workload you're interested in::

   root@crownbay:/sys/kernel/debug/tracing# cat /media/sdc/testfile.txt

And look at the output (note here that we're using ``trace_pipe`` instead of
``trace`` to capture this trace --- this allows us to wait around on the pipe
for data to appear)::

   root@crownbay:/sys/kernel/debug/tracing# cat trace_pipe
   cat-3587  [001] d..1  3023.276361:   8,32   Q   R 1699848 + 8 [cat]
   cat-3587  [001] d..1  3023.276497:   8,32   m   N cfq3587 activate rq, drv=1
   cat-3587  [001] d..2  3023.276500:   8,32   D   R 1699848 + 8 [cat]

And this turns off tracing for the specified device::

   root@crownbay:/sys/kernel/debug/tracing# echo 0 > /sys/block/sdc/trace/enable
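
The enable / trace / disable steps above can be collected into one helper.
This is a sketch, not an official interface: the tracing root and the device's
trace directory are parameters so the sequence can be exercised against a
scratch directory, while on a real target they would be
``/sys/kernel/debug/tracing`` and ``/sys/block/<dev>/trace``. Resetting
``current_tracer`` to ``nop`` at the end is an extra cleanup step, not part of
the listing above:

```shell
# Sketch: wrap the sysfs steps above. On a real target, tracing_dir is
# /sys/kernel/debug/tracing and dev_dir is /sys/block/<dev>/trace; here
# they are parameters so the sequence can run against scratch files.

blk_trace_session() {
    tracing_dir="$1"
    dev_dir="$2"
    echo 1 > "$dev_dir/enable"               # per-device trace switch on
    echo blk > "$tracing_dir/current_tracer" # select the blk tracer
    # ... run the workload and read $tracing_dir/trace_pipe here ...
    echo nop > "$tracing_dir/current_tracer" # reset the tracer (cleanup)
    echo 0 > "$dev_dir/enable"               # per-device switch back off
}

# Exercise against a scratch layout:
root=$(mktemp -d)
mkdir -p "$root/tracing" "$root/sdc"
blk_trace_session "$root/tracing" "$root/sdc"
cat "$root/sdc/enable" "$root/tracing/current_tracer"
# prints 0 then nop
```
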

blktrace Documentation
----------------------

Online versions of the manual pages for the commands discussed in this
section can be found here:

- https://linux.die.net/man/8/blktrace

- https://linux.die.net/man/8/btrace

The above manual pages, along with manual pages for the other blktrace
utilities (``btt``, ``blkiomon``, etc.) can be found in the ``/doc``
directory of the blktrace tools git repository::

   $ git clone git://git.kernel.dk/blktrace.git