author     Scott Rifenbark <srifenbark@gmail.com>                2016-04-15 10:17:35 -0700
committer  Richard Purdie <richard.purdie@linuxfoundation.org>   2016-04-18 16:28:25 +0100
commit     9f970b6bc1061682df08e25da54d7f24cfb4189c (patch)
tree       1d6c95b51488d94d9693ba17d55cdcc61d38b24f /documentation
parent     1d93104d0eaeafae695e09edda8a858776d2b49f (diff)
download   poky-9f970b6bc1061682df08e25da54d7f24cfb4189c.tar.gz
dev-manual, profile-manual, ref-manual: Purging Oprofile stuff
Fixes [YOCTO #9264]

Several occurrences of tools-profile and the like had to be dealt with.

(From yocto-docs rev: 62f45579970f47d22dabe921a51c663059a04576)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Diffstat (limited to 'documentation')
-rw-r--r--  documentation/dev-manual/dev-manual-common-tasks.xml   |  13
-rw-r--r--  documentation/profile-manual/profile-manual-usage.xml  | 534
-rw-r--r--  documentation/ref-manual/ref-features.xml              |  14
3 files changed, 15 insertions(+), 546 deletions(-)
diff --git a/documentation/dev-manual/dev-manual-common-tasks.xml b/documentation/dev-manual/dev-manual-common-tasks.xml
index e97cc734f9..f926f1d477 100644
--- a/documentation/dev-manual/dev-manual-common-tasks.xml
+++ b/documentation/dev-manual/dev-manual-common-tasks.xml
@@ -336,11 +336,12 @@
              DEPENDS_append_one = " foo"
              DEPENDS_prepend_one = "foo "
         </literallayout>
-        As an actual example, here's a line from the recipe for
-        the OProfile profiler, which lists an extra build-time
-        dependency when building specifically for 64-bit PowerPC:
+        As an actual example, here's a line from the recipe
+        for gnutls, which adds dependencies on
+        "argp-standalone" when building with the musl C
+        library:
         <literallayout class='monospaced'>
-             DEPENDS_append_powerpc64 = " libpfm4"
+             DEPENDS_append_libc-musl = " argp-standalone"
         </literallayout>
         <note>
             Avoiding "+=" and "=+" and using
@@ -8216,7 +8217,9 @@
      SRCREV_pn-matchbox-panel-2 ?= "${AUTOREV}"
      SRCREV_pn-matchbox-themes-extra ?= "${AUTOREV}"
      SRCREV_pn-matchbox-terminal ?= "${AUTOREV}"
      SRCREV_pn-matchbox-wm ?= "${AUTOREV}"
+     SRCREV_pn-settings-daemon ?= "${AUTOREV}"
+     SRCREV_pn-screenshot ?= "${AUTOREV}"
      .
      .
      .
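The two dev-manual hunks above touch the same pair of BitBake idioms the manual is documenting: override-style conditional appends to DEPENDS, and per-recipe SRCREV defaults in local.conf. As a compact reminder, here is a minimal sketch in recipe/local.conf syntax; the base DEPENDS value is illustrative only, while the override-style lines are taken from the hunks:

    # In a recipe: append a build-time dependency only when the
    # "libc-musl" override is active, i.e. when building against musl.
    # Note the leading space inside the appended string.
    DEPENDS = "zlib"
    DEPENDS_append_libc-musl = " argp-standalone"

    # In conf/local.conf: track the latest upstream revision of selected
    # recipes.  The weak "?=" assignment only provides a default, so a
    # hard assignment made elsewhere takes precedence.
    SRCREV_pn-matchbox-panel-2 ?= "${AUTOREV}"
    SRCREV_pn-settings-daemon ?= "${AUTOREV}"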
diff --git a/documentation/profile-manual/profile-manual-usage.xml b/documentation/profile-manual/profile-manual-usage.xml
index 1359c82522..310e8f01c5 100644
--- a/documentation/profile-manual/profile-manual-usage.xml
+++ b/documentation/profile-manual/profile-manual-usage.xml
@@ -2228,540 +2228,6 @@
 </section>
 </section>
 
2231 <section id='profile-manual-oprofile'>
2232 <title>oprofile</title>
2233
2234 <para>
2235 oprofile itself is a command-line application that runs on the
2236 target system.
2237 </para>
2238
2239 <section id='oprofile-setup'>
2240 <title>Setup</title>
2241
2242 <para>
2243 For this section, we'll assume you've already performed the
2244 basic setup outlined in the
2245 "<link linkend='profile-manual-general-setup'>General Setup</link>"
2246 section.
2247 </para>
2248
2249 <para>
2250 For the section that deals with running oprofile from the command-line,
2251 we assume you've ssh'ed to the host and will be running
2252 oprofile on the target.
2253 </para>
2254
2255 <para>
2256 oprofileui (oprofile-viewer) is a GUI-based program that runs
2257 on the host and interacts remotely with the target.
2258 See the oprofileui section for the exact steps needed to
2259 install oprofileui on the host.
2260 </para>
2261 </section>
2262
2263 <section id='oprofile-basic-usage'>
2264 <title>Basic Usage</title>
2265
2266 <para>
2267 Oprofile as configured in Yocto is a system-wide profiler
2268 (i.e. the version in Yocto doesn't yet make use of the
2269 perf_events interface which would allow it to profile
2270 specific processes and workloads). It relies on hardware
2271 counter support in the hardware (but can fall back to a
2272 timer-based mode), which means that it doesn't take
2273 advantage of tracepoints or other event sources for example.
2274 </para>
2275
2276 <para>
2277 It consists of a kernel module that collects samples and a
2278 userspace daemon that writes the sample data to disk.
2279 </para>
2280
2281 <para>
2282 The 'opcontrol' shell script is used for transparently
2283 managing these components and starting and stopping
2284 profiles, and the 'opreport' command is used to
2285 display the results.
2286 </para>
2287
2288 <para>
2289 The oprofile daemon should already be running, but before
2290 you start profiling, you may need to change some settings
2291 and some of these settings may require the daemon to not
2292 be running. One of these settings is the path to the
2293 vmlinux file, which you'll want to set using the --vmlinux
2294 option if you want the kernel profiled:
2295 <literallayout class='monospaced'>
2296 root@crownbay:~# opcontrol --vmlinux=/boot/vmlinux-`uname -r`
2297 The profiling daemon is currently active, so changes to the configuration
2298 will be used the next time you restart oprofile after a --shutdown or --deinit.
2299 </literallayout>
2300 You can check if vmlinux file: is set using opcontrol --status:
2301 <literallayout class='monospaced'>
2302 root@crownbay:~# opcontrol --status
2303 Daemon paused: pid 1334
2304 Separate options: library
2305 vmlinux file: none
2306 Image filter: none
2307 Call-graph depth: 6
2308 </literallayout>
2309 If it's not, you need to shutdown the daemon, add the setting
2310 and restart the daemon:
2311 <literallayout class='monospaced'>
2312 root@crownbay:~# opcontrol --shutdown
2313 Killing daemon.
2314
2315 root@crownbay:~# opcontrol --vmlinux=/boot/vmlinux-`uname -r`
2316 root@crownbay:~# opcontrol --start-daemon
2317 Using default event: CPU_CLK_UNHALTED:100000:0:1:1
2318 Using 2.6+ OProfile kernel interface.
2319 Reading module info.
2320 Using log file /var/lib/oprofile/samples/oprofiled.log
2321 Daemon started.
2322 </literallayout>
2323 If we check the status again we now see our updated settings:
2324 <literallayout class='monospaced'>
2325 root@crownbay:~# opcontrol --status
2326 Daemon paused: pid 1649
2327 Separate options: library
2328 vmlinux file: /boot/vmlinux-3.4.11-yocto-standard
2329 Image filter: none
2330 Call-graph depth: 6
2331 </literallayout>
2332 We're now in a position to run a profile. For that we use
2333 'opcontrol --start':
2334 <literallayout class='monospaced'>
2335 root@crownbay:~# opcontrol --start
2336 Profiler running.
2337 </literallayout>
2338 In another window, run our wget workload:
2339 <literallayout class='monospaced'>
2340 root@crownbay:~# rm linux-2.6.19.2.tar.bz2; wget <ulink url='http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2'>http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2</ulink>; sync
2341 Connecting to downloads.yoctoproject.org (140.211.169.59:80)
2342 linux-2.6.19.2.tar.b 100% |*******************************| 41727k 0:00:00 ETA
2343 </literallayout>
2344 To stop the profile we use 'opcontrol --shutdown', which not
2345 only stops the profile but shuts down the daemon as well:
2346 <literallayout class='monospaced'>
2347 root@crownbay:~# opcontrol --shutdown
2348 Stopping profiling.
2349 Killing daemon.
2350 </literallayout>
2351 Oprofile writes sample data to /var/lib/oprofile/samples,
2352 which you can look at if you're interested in seeing how the
2353 samples are structured. This is also interesting because
2354 it's related to how you dive down to get further details
2355 about specific executables in OProfile.
2356 </para>
2357
2358 <para>
2359 To see the default display output for a profile, simply type
2360 'opreport', which will show the results using the data in
2361 /var/lib/oprofile/samples:
2362 <literallayout class='monospaced'>
2363 root@crownbay:~# opreport
2364
2365 WARNING! The OProfile kernel driver reports sample buffer overflows.
2366 Such overflows can result in incorrect sample attribution, invalid sample
2367 files and other symptoms. See the oprofiled.log for details.
2368 You should adjust your sampling frequency to eliminate (or at least minimize)
2369 these overflows.
2370 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2371 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2372 CPU_CLK_UNHALT...|
2373 samples| %|
2374 ------------------
2375 464365 79.8156 vmlinux-3.4.11-yocto-standard
2376 65108 11.1908 oprofiled
2377 CPU_CLK_UNHALT...|
2378 samples| %|
2379 ------------------
2380 64416 98.9372 oprofiled
2381 692 1.0628 libc-2.16.so
2382 36959 6.3526 no-vmlinux
2383 4378 0.7525 busybox
2384 CPU_CLK_UNHALT...|
2385 samples| %|
2386 ------------------
2387 2844 64.9612 libc-2.16.so
2388 1337 30.5391 busybox
2389 193 4.4084 ld-2.16.so
2390 2 0.0457 libnss_compat-2.16.so
2391 1 0.0228 libnsl-2.16.so
2392 1 0.0228 libnss_files-2.16.so
2393 4344 0.7467 bash
2394 CPU_CLK_UNHALT...|
2395 samples| %|
2396 ------------------
2397 2657 61.1648 bash
2398 1665 38.3287 libc-2.16.so
2399 18 0.4144 ld-2.16.so
2400 3 0.0691 libtinfo.so.5.9
2401 1 0.0230 libdl-2.16.so
2402 3118 0.5359 nf_conntrack
2403 686 0.1179 matchbox-terminal
2404 CPU_CLK_UNHALT...|
2405 samples| %|
2406 ------------------
2407 214 31.1953 libglib-2.0.so.0.3200.4
2408 114 16.6181 libc-2.16.so
2409 79 11.5160 libcairo.so.2.11200.2
2410 78 11.3703 libgdk-x11-2.0.so.0.2400.8
2411 51 7.4344 libpthread-2.16.so
2412 45 6.5598 libgobject-2.0.so.0.3200.4
2413 29 4.2274 libvte.so.9.2800.2
2414 25 3.6443 libX11.so.6.3.0
2415 19 2.7697 libxcb.so.1.1.0
2416 17 2.4781 libgtk-x11-2.0.so.0.2400.8
2417 12 1.7493 librt-2.16.so
2418 3 0.4373 libXrender.so.1.3.0
2419 671 0.1153 emgd
2420 411 0.0706 nf_conntrack_ipv4
2421 391 0.0672 iptable_nat
2422 378 0.0650 nf_nat
2423 263 0.0452 Xorg
2424 CPU_CLK_UNHALT...|
2425 samples| %|
2426 ------------------
2427 106 40.3042 Xorg
2428 53 20.1521 libc-2.16.so
2429 31 11.7871 libpixman-1.so.0.27.2
2430 26 9.8859 emgd_drv.so
2431 16 6.0837 libemgdsrv_um.so.1.5.15.3226
2432 11 4.1825 libEMGD2d.so.1.5.15.3226
2433 9 3.4221 libfb.so
2434 7 2.6616 libpthread-2.16.so
2435 1 0.3802 libudev.so.0.9.3
2436 1 0.3802 libdrm.so.2.4.0
2437 1 0.3802 libextmod.so
2438 1 0.3802 mouse_drv.so
2439 .
2440 .
2441 .
2442 9 0.0015 connmand
2443 CPU_CLK_UNHALT...|
2444 samples| %|
2445 ------------------
2446 4 44.4444 libglib-2.0.so.0.3200.4
2447 2 22.2222 libpthread-2.16.so
2448 1 11.1111 connmand
2449 1 11.1111 libc-2.16.so
2450 1 11.1111 librt-2.16.so
2451 6 0.0010 oprofile-server
2452 CPU_CLK_UNHALT...|
2453 samples| %|
2454 ------------------
2455 3 50.0000 libc-2.16.so
2456 1 16.6667 oprofile-server
2457 1 16.6667 libpthread-2.16.so
2458 1 16.6667 libglib-2.0.so.0.3200.4
2459 5 8.6e-04 gconfd-2
2460 CPU_CLK_UNHALT...|
2461 samples| %|
2462 ------------------
2463 2 40.0000 libdbus-1.so.3.7.2
2464 2 40.0000 libglib-2.0.so.0.3200.4
2465 1 20.0000 libc-2.16.so
2466 </literallayout>
2467 The output above shows the breakdown of samples by both
2468 number of samples and percentage for each executable.
2469 Within an executable, the sample counts are broken down
2470 further into executable and shared libraries (DSOs) used
2471 by the executable.
2472 </para>
2473
2474 <para>
2475 To get even more detailed breakdowns by function, we need to
2476 have the full paths to the DSOs, which we can get by
2477 using -f with opreport:
2478 <literallayout class='monospaced'>
2479 root@crownbay:~# opreport -f
2480
2481 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2482 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2483 CPU_CLK_UNHALT...|
2484 samples| %|
2485
2486 464365 79.8156 /boot/vmlinux-3.4.11-yocto-standard
2487 65108 11.1908 /usr/bin/oprofiled
2488 CPU_CLK_UNHALT...|
2489 samples| %|
2490 ------------------
2491 64416 98.9372 /usr/bin/oprofiled
2492 692 1.0628 /lib/libc-2.16.so
2493 36959 6.3526 /no-vmlinux
2494 4378 0.7525 /bin/busybox
2495 CPU_CLK_UNHALT...|
2496 samples| %|
2497 ------------------
2498 2844 64.9612 /lib/libc-2.16.so
2499 1337 30.5391 /bin/busybox
2500 193 4.4084 /lib/ld-2.16.so
2501 2 0.0457 /lib/libnss_compat-2.16.so
2502 1 0.0228 /lib/libnsl-2.16.so
2503 1 0.0228 /lib/libnss_files-2.16.so
2504 4344 0.7467 /bin/bash
2505 CPU_CLK_UNHALT...|
2506 samples| %|
2507 ------------------
2508 2657 61.1648 /bin/bash
2509 1665 38.3287 /lib/libc-2.16.so
2510 18 0.4144 /lib/ld-2.16.so
2511 3 0.0691 /lib/libtinfo.so.5.9
2512 1 0.0230 /lib/libdl-2.16.so
2513 .
2514 .
2515 .
2516 </literallayout>
2517 Using the paths shown in the above output and the -l option to
2518 opreport, we can see all the functions that have hits in the
2519 profile and their sample counts and percentages. Here's a
2520 portion of what we get for the kernel:
2521 <literallayout class='monospaced'>
2522 root@crownbay:~# opreport -l /boot/vmlinux-3.4.11-yocto-standard
2523
2524 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2525 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2526 samples % symbol name
2527 233981 50.3873 intel_idle
2528 15437 3.3243 rb_get_reader_page
2529 14503 3.1232 ring_buffer_consume
2530 14092 3.0347 mutex_spin_on_owner
2531 13024 2.8047 read_hpet
2532 8039 1.7312 sub_preempt_count
2533 7096 1.5281 ioread32
2534 6997 1.5068 add_preempt_count
2535 3985 0.8582 rb_advance_reader
2536 3488 0.7511 add_event_entry
2537 3303 0.7113 get_parent_ip
2538 3104 0.6684 rb_buffer_peek
2539 2960 0.6374 op_cpu_buffer_read_entry
2540 2614 0.5629 sync_buffer
2541 2545 0.5481 debug_smp_processor_id
2542 2456 0.5289 ohci_irq
2543 2397 0.5162 memset
2544 2349 0.5059 __copy_to_user_ll
2545 2185 0.4705 ring_buffer_event_length
2546 1918 0.4130 in_lock_functions
2547 1850 0.3984 __schedule
2548 1767 0.3805 __copy_from_user_ll_nozero
2549 1575 0.3392 rb_event_data_length
2550 1256 0.2705 memcpy
2551 1233 0.2655 system_call
2552 1213 0.2612 menu_select
2553 </literallayout>
2554 Notice that above we see an entry for the __copy_to_user_ll()
2555 function that we've looked at with other profilers as well.
2556 </para>
2557
2558 <para>
2559 Here's what we get when we do the same thing for the
2560 busybox executable:
2561 <literallayout class='monospaced'>
2562 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2563 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2564 samples % image name symbol name
2565 349 8.4198 busybox retrieve_file_data
2566 308 7.4306 libc-2.16.so _IO_file_xsgetn
2567 283 6.8275 libc-2.16.so __read_nocancel
2568 235 5.6695 libc-2.16.so syscall
2569 233 5.6212 libc-2.16.so clearerr
2570 215 5.1870 libc-2.16.so fread
2571 181 4.3667 libc-2.16.so __write_nocancel
2572 158 3.8118 libc-2.16.so __underflow
2573 151 3.6429 libc-2.16.so _dl_addr
2574 150 3.6188 busybox progress_meter
2575 150 3.6188 libc-2.16.so __poll_nocancel
2576 148 3.5706 libc-2.16.so _IO_file_underflow@@GLIBC_2.1
2577 137 3.3052 busybox safe_poll
2578 125 3.0157 busybox bb_progress_update
2579 122 2.9433 libc-2.16.so __x86.get_pc_thunk.bx
2580 95 2.2919 busybox full_write
2581 81 1.9542 busybox safe_write
2582 77 1.8577 busybox xwrite
2583 72 1.7370 libc-2.16.so _IO_file_read
2584 71 1.7129 libc-2.16.so _IO_sgetn
2585 67 1.6164 libc-2.16.so poll
2586 52 1.2545 libc-2.16.so _IO_switch_to_get_mode
2587 45 1.0856 libc-2.16.so read
2588 34 0.8203 libc-2.16.so write
2589 32 0.7720 busybox monotonic_sec
2590 25 0.6031 libc-2.16.so vfprintf
2591 22 0.5308 busybox get_mono
2592 14 0.3378 ld-2.16.so strcmp
2593 14 0.3378 libc-2.16.so __x86.get_pc_thunk.cx
2594 .
2595 .
2596 .
2597 </literallayout>
2598 Since we recorded the profile with a callchain depth of 6, we
2599 should be able to see our __copy_to_user_ll() callchains in
2600 the output, and indeed we can if we search around a bit in
2601 the 'opreport --callgraph' output:
2602 <literallayout class='monospaced'>
2603 root@crownbay:~# opreport --callgraph /boot/vmlinux-3.4.11-yocto-standard
2604
2605 392 6.9639 vmlinux-3.4.11-yocto-standard sock_aio_read
2606 736 13.0751 vmlinux-3.4.11-yocto-standard __generic_file_aio_write
2607 3255 57.8255 vmlinux-3.4.11-yocto-standard inet_recvmsg
2608 785 0.1690 vmlinux-3.4.11-yocto-standard tcp_recvmsg
2609 1790 31.7940 vmlinux-3.4.11-yocto-standard local_bh_enable
2610 1238 21.9893 vmlinux-3.4.11-yocto-standard __kfree_skb
2611 992 17.6199 vmlinux-3.4.11-yocto-standard lock_sock_nested
2612 785 13.9432 vmlinux-3.4.11-yocto-standard tcp_recvmsg [self]
2613 525 9.3250 vmlinux-3.4.11-yocto-standard release_sock
2614 112 1.9893 vmlinux-3.4.11-yocto-standard tcp_cleanup_rbuf
2615 72 1.2789 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec
2616
2617 170 0.0366 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec
2618 1491 73.3038 vmlinux-3.4.11-yocto-standard memcpy_toiovec
2619 327 16.0767 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec
2620 170 8.3579 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec [self]
2621 20 0.9833 vmlinux-3.4.11-yocto-standard copy_to_user
2622
2623 2588 98.2909 vmlinux-3.4.11-yocto-standard copy_to_user
2624 2349 0.5059 vmlinux-3.4.11-yocto-standard __copy_to_user_ll
2625 2349 89.2138 vmlinux-3.4.11-yocto-standard __copy_to_user_ll [self]
2626 166 6.3046 vmlinux-3.4.11-yocto-standard do_page_fault
2627 </literallayout>
2628 Remember that by default OProfile sessions are cumulative
2629 i.e. if you start and stop a profiling session, then start a
2630 new one, the new one will not erase the previous run(s) but
2631 will build on it. If you want to restart a profile from scratch,
2632 you need to reset:
2633 <literallayout class='monospaced'>
2634 root@crownbay:~# opcontrol --reset
2635 </literallayout>
2636 </para>
2637 </section>
2638
2639 <section id='oprofileui-a-gui-for-oprofile'>
2640 <title>OProfileUI - A GUI for OProfile</title>
2641
2642 <para>
2643 Yocto also supports a graphical UI for controlling and viewing
2644 OProfile traces, called OProfileUI. To use it, you first need
2645 to clone the oprofileui git repo, then configure, build, and
2646 install it:
2647 <literallayout class='monospaced'>
2648 [trz@empanada tmp]$ git clone git://git.yoctoproject.org/oprofileui
2649 [trz@empanada tmp]$ cd oprofileui
2650 [trz@empanada oprofileui]$ ./autogen.sh
2651 [trz@empanada oprofileui]$ sudo make install
2652 </literallayout>
2653 OprofileUI replaces the 'opreport' functionality with a GUI,
2654 and normally doesn't require the user to use 'opcontrol' either.
2655 If you want to profile the kernel, however, you need to either
2656 use the UI to specify a vmlinux or use 'opcontrol' to specify
2657 it on the target:
2658 </para>
2659
2660 <para>
2661 First, on the target, check if vmlinux file: is set:
2662 <literallayout class='monospaced'>
2663 root@crownbay:~# opcontrol --status
2664 </literallayout>
2665 If not:
2666 <literallayout class='monospaced'>
2667 root@crownbay:~# opcontrol --shutdown
2668 root@crownbay:~# opcontrol --vmlinux=/boot/vmlinux-`uname -r`
2669 root@crownbay:~# opcontrol --start-daemon
2670 </literallayout>
2671 Now, start the oprofile UI on the host system:
2672 <literallayout class='monospaced'>
2673 [trz@empanada oprofileui]$ oprofile-viewer
2674 </literallayout>
2675 To run a profile on the remote system, first connect to the
2676 remote system by pressing the 'Connect' button and supplying
2677 the IP address and port of the remote system (the default
2678 port is 4224).
2679 </para>
2680
2681 <para>
2682 The oprofile server should automatically be started already.
2683 If not, the connection will fail and you either typed in the
2684 wrong IP address and port (see below), or you need to start
2685 the server yourself:
2686 <literallayout class='monospaced'>
2687 root@crownbay:~# oprofile-server
2688 </literallayout>
2689 Or, to specify a specific port:
2690 <literallayout class='monospaced'>
2691 root@crownbay:~# oprofile-server --port 8888
2692 </literallayout>
2693 Once connected, press the 'Start' button and then run the
2694 wget workload on the remote system:
2695 <literallayout class='monospaced'>
2696 root@crownbay:~# rm linux-2.6.19.2.tar.bz2; wget <ulink url='http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2'>http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2</ulink>; sync
2697 Connecting to downloads.yoctoproject.org (140.211.169.59:80)
2698 linux-2.6.19.2.tar.b 100% |*******************************| 41727k 0:00:00 ETA
2699 </literallayout>
2700 Once the workload completes, press the 'Stop' button. At that
2701 point the OProfile viewer will download the profile files it's
2702 collected (this may take some time, especially if the kernel
2703 was profiled). While it downloads the files, you should see
2704 something like the following:
2705 </para>
2706
2707 <para>
2708 <imagedata fileref="figures/oprofileui-downloading.png" width="6in" depth="7in" align="center" scalefit="1" />
2709 </para>
2710
2711 <para>
2712 Once the profile files have been retrieved, you should see a
2713 list of the processes that were profiled:
2714 </para>
2715
2716 <para>
2717 <imagedata fileref="figures/oprofileui-processes.png" width="6in" depth="7in" align="center" scalefit="1" />
2718 </para>
2719
2720 <para>
2721 If you select one of them, you should see all the symbols that
2722 were hit during the profile. Selecting one of them will show a
2723 list of callers and callees of the chosen function in two
2724 panes below the top pane. For example, here's what we see
2725 when we select __copy_to_user_ll():
2726 </para>
2727
2728 <para>
2729 <imagedata fileref="figures/oprofileui-copy-to-user.png" width="6in" depth="7in" align="center" scalefit="1" />
2730 </para>
2731
2732 <para>
2733 As another example, we can look at the busybox process and see
2734 that the progress meter made a system call:
2735 </para>
2736
2737 <para>
2738 <imagedata fileref="figures/oprofileui-busybox.png" width="6in" depth="7in" align="center" scalefit="1" />
2739 </para>
2740 </section>
2741
2742 <section id='oprofile-documentation'>
2743 <title>Documentation</title>
2744
2745 <para>
2746 Yocto already has some information on setting up and using
2747 OProfile and oprofileui. As this document doesn't cover
2748 everything in detail, it may be worth taking a look at the
2749 Yocto Project Development Manual
2750 </para>
2751
2752 <para>
2753 The OProfile manual can be found here:
2754 <ulink url='http://oprofile.sourceforge.net/doc/index.html'>OProfile manual</ulink>
2755 </para>
2756
2757 <para>
2758 The OProfile website contains links to the above manual and
2759 a bunch of other items, including an extensive set of examples:
2760 <ulink url='http://oprofile.sourceforge.net/about/'>About OProfile</ulink>
2761 </para>
2762 </section>
2763 </section>
2764
 <section id='profile-manual-sysprof'>
     <title>Sysprof</title>
 
diff --git a/documentation/ref-manual/ref-features.xml b/documentation/ref-manual/ref-features.xml
index 56e1185681..fd7693500b 100644
--- a/documentation/ref-manual/ref-features.xml
+++ b/documentation/ref-manual/ref-features.xml
@@ -308,6 +308,13 @@
         <listitem><para><emphasis>nfs-server:</emphasis>
             Installs an NFS server.
             </para></listitem>
+        <listitem><para><emphasis>perf:</emphasis>
+            Installs profiling tools such as
+            <filename>perf</filename>, <filename>systemtap</filename>,
+            and <filename>LTTng</filename>.
+            For general information on user-space tools, see the
+            <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+            </para></listitem>
         <listitem><para><emphasis>ssh-server-dropbear:</emphasis>
             Installs the Dropbear minimal SSH server.
             </para></listitem>
@@ -328,13 +335,6 @@
             For information on tracing and profiling, see the
             <ulink url='&YOCTO_DOCS_PROF_URL;'>Yocto Project Profiling and Tracing Manual</ulink>.
             </para></listitem>
-        <listitem><para><emphasis>tools-profile:</emphasis>
-            Installs profiling tools such as
-            <filename>oprofile</filename>, <filename>exmap</filename>,
-            and <filename>LTTng</filename>.
-            For general information on user-space tools, see the
-            <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-            </para></listitem>
         <listitem><para><emphasis>tools-sdk:</emphasis>
             Installs a full SDK that runs on the device.
             </para></listitem>
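For orientation, the list items in the two ref-manual hunks above document feature names consumed through the IMAGE_FEATURES mechanism. A minimal local.conf sketch, assuming the usual EXTRA_IMAGE_FEATURES route and using only feature names that appear in this hunk's context lines:

    # conf/local.conf: add optional features to every image built in
    # this build directory.
    EXTRA_IMAGE_FEATURES += "ssh-server-dropbear nfs-server"

    # An image recipe can also request features for itself directly:
    IMAGE_FEATURES += "tools-sdk"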