1. 29 Jun, 2015 1 commit
• tracing: Have filter check for balanced ops · 63dec311
      Steven Rostedt authored
      commit 2cf30dc180cea808077f003c5116388183e54f9e upstream.
      
      When the following filter is used it causes a warning to trigger:
      
       # cd /sys/kernel/debug/tracing
       # echo "((dev==1)blocks==2)" > events/ext4/ext4_truncate_exit/filter
      -bash: echo: write error: Invalid argument
       # cat events/ext4/ext4_truncate_exit/filter
      ((dev==1)blocks==2)
      ^
      parse_error: No error
      
       ------------[ cut here ]------------
       WARNING: CPU: 2 PID: 1223 at kernel/trace/trace_events_filter.c:1640 replace_preds+0x3c5/0x990()
       Modules linked in: bnep lockd grace bluetooth  ...
       CPU: 3 PID: 1223 Comm: bash Tainted: G        W       4.1.0-rc3-test+ #450
       Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
        0000000000000668 ffff8800c106bc98 ffffffff816ed4f9 ffff88011ead0cf0
        0000000000000000 ffff8800c106bcd8 ffffffff8107fb07 ffffffff8136b46c
        ffff8800c7d81d48 ffff8800d4c2bc00 ffff8800d4d4f920 00000000ffffffea
       Call Trace:
        [<ffffffff816ed4f9>] dump_stack+0x4c/0x6e
        [<ffffffff8107fb07>] warn_slowpath_common+0x97/0xe0
        [<ffffffff8136b46c>] ? _kstrtoull+0x2c/0x80
        [<ffffffff8107fb6a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff81159065>] replace_preds+0x3c5/0x990
        [<ffffffff811596b2>] create_filter+0x82/0xb0
        [<ffffffff81159944>] apply_event_filter+0xd4/0x180
        [<ffffffff81152bbf>] event_filter_write+0x8f/0x120
        [<ffffffff811db2a8>] __vfs_write+0x28/0xe0
        [<ffffffff811dda43>] ? __sb_start_write+0x53/0xf0
        [<ffffffff812e51e0>] ? security_file_permission+0x30/0xc0
        [<ffffffff811dc408>] vfs_write+0xb8/0x1b0
        [<ffffffff811dc72f>] SyS_write+0x4f/0xb0
        [<ffffffff816f5217>] system_call_fastpath+0x12/0x6a
       ---[ end trace e11028bd95818dcd ]---
      
Worse yet, reading the error message (the filter again), it says that
there was no error, when there clearly was. The issue is that the
code that checks the input does not check for balanced ops. That is,
it does not verify that an operator sits between a closing parenthesis
and the next token.

This only causes a warning, and fails out before doing any real
harm, but it should still not cause a warning, and the error reported
should work:
      
       # cd /sys/kernel/debug/tracing
       # echo "((dev==1)blocks==2)" > events/ext4/ext4_truncate_exit/filter
      -bash: echo: write error: Invalid argument
       # cat events/ext4/ext4_truncate_exit/filter
      ((dev==1)blocks==2)
      ^
      parse_error: Meaningless filter expression
      
      And give no kernel warning.
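
The idea behind the added check can be sketched as a standalone balance
counter over the filter's postfix form (a paraphrase, not the literal
check_preds() code): each predicate pushes one operand, each binary op
nets minus one, and a well-formed expression must end with exactly one
operand. "((dev==1)blocks==2)" ends with two, which is now reported as
a parse error instead of tripping the WARN_ON in replace_preds():

 #include <stdbool.h>

 enum elt_type { ELT_PRED, ELT_BINOP };	/* predicate vs. AND/OR */

 /* returns true when the postfix expression is balanced */
 static bool postfix_balanced(const enum elt_type *elts, int nr)
 {
 	int cnt = 0;	/* simulated operand-stack depth */
 	int i;

 	for (i = 0; i < nr; i++) {
 		if (elts[i] == ELT_PRED)
 			cnt++;		/* operand pushed */
 		else
 			cnt--;		/* two popped, one pushed */
 		if (cnt < 0)
 			return false;	/* op missing an operand */
 	}
 	return cnt == 1;	/* exactly one result must remain */
 }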
      
      Link: http://lkml.kernel.org/r/20150615175025.7e809215@gandalf.local.home
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      [ luis: backported to 3.16:
        - unconditionally decrement cnt as the OP_NOT logic was introduced only
          by e12c09cf3087 ("tracing: Add NOT to filtering logic") ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 22 Jun, 2015 1 commit
  3. 06 May, 2015 1 commit
  4. 29 Apr, 2015 1 commit
  5. 06 Mar, 2015 1 commit
  6. 14 Nov, 2014 1 commit
• tracing/syscalls: Ignore numbers outside NR_syscalls' range · 3ad3add7
      Rabin Vincent authored
      commit 086ba77a6db00ed858ff07451bedee197df868c9 upstream.
      
      ARM has some private syscalls (for example, set_tls(2)) which lie
      outside the range of NR_syscalls.  If any of these are called while
      syscall tracing is being performed, out-of-bounds array access will
      occur in the ftrace and perf sys_{enter,exit} handlers.
      
       # trace-cmd record -e raw_syscalls:* true && trace-cmd report
       ...
       true-653   [000]   384.675777: sys_enter:            NR 192 (0, 1000, 3, 4000022, ffffffff, 0)
       true-653   [000]   384.675812: sys_exit:             NR 192 = 1995915264
       true-653   [000]   384.675971: sys_enter:            NR 983045 (76f74480, 76f74000, 76f74b28, 76f74480, 76f76f74, 1)
       true-653   [000]   384.675988: sys_exit:             NR 983045 = 0
       ...
      
       # trace-cmd record -e syscalls:* true
       [   17.289329] Unable to handle kernel paging request at virtual address aaaaaace
       [   17.289590] pgd = 9e71c000
       [   17.289696] [aaaaaace] *pgd=00000000
       [   17.289985] Internal error: Oops: 5 [#1] PREEMPT SMP ARM
       [   17.290169] Modules linked in:
       [   17.290391] CPU: 0 PID: 704 Comm: true Not tainted 3.18.0-rc2+ #21
       [   17.290585] task: 9f4dab00 ti: 9e710000 task.ti: 9e710000
       [   17.290747] PC is at ftrace_syscall_enter+0x48/0x1f8
       [   17.290866] LR is at syscall_trace_enter+0x124/0x184
      
      Fix this by ignoring out-of-NR_syscalls-bounds syscall numbers.
      
Commit cd0980fc "tracing: Check invalid syscall nr while tracing syscalls"
added the check for numbers below zero, but it should also have checked
for numbers greater than or equal to NR_syscalls.
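
The guard boils down to one extra comparison in each of the
sys_enter/sys_exit handlers (a sketch of the added check, with the
surrounding code omitted):

 	/* reject both negative and too-large syscall numbers before
 	 * they are used as an index into the syscall metadata array */
 	syscall_nr = syscall_get_nr(current, regs);
 	if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
 		return;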
      
      Link: http://lkml.kernel.org/p/1414620418-29472-1-git-send-email-rabin@rab.in
      
      Fixes: cd0980fc "tracing: Check invalid syscall nr while tracing syscalls"
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  7. 09 Oct, 2014 1 commit
• ring-buffer: Fix infinite spin in reading buffer · 78a3db11
      Steven Rostedt (Red Hat) authored
      commit 24607f114fd14f2f37e3e0cb3d47bce96e81e848 upstream.
      
      Commit 651e22f2701b "ring-buffer: Always reset iterator to reader page"
      fixed one bug but in the process caused another one. The reset is to
      update the header page, but that fix also changed the way the cached
      reads were updated. The cache reads are used to test if an iterator
      needs to be updated or not.
      
      A ring buffer iterator, when created, disables writes to the ring buffer
      but does not stop other readers or consuming reads from happening.
      Although all readers are synchronized via a lock, they are only
      synchronized when in the ring buffer functions. Those functions may
be called by any number of readers. The iterator continues down when
it's not interrupted by a consuming reader. If a consuming read
occurs, the iterator starts from the beginning of the buffer.
      
      The way the iterator sees that a consuming read has happened since
      its last read is by checking the reader "cache". The cache holds the
      last counts of the read and the reader page itself.
      
Commit 651e22f2701b changed what was saved by the cache_read when
rb_iter_reset() occurred, making the iterator never match the cache.
Then if the iterator calls rb_iter_reset(), it goes into an
infinite loop: it sees that the cache doesn't match, does the reset,
and retries, just to find that the cache still doesn't match! This
should never happen, as the reset is supposed to set the cache to the
current value and there are locks that keep a consuming reader from
having access to the data.
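
A sketch of the check at the top of rb_iter_peek() that triggers the
reset (abridged; the fix makes rb_iter_reset() save exactly the values
compared here):

 	/* a consuming read invalidates the iterator; detect it by
 	 * comparing against the cached read state */
 	if (iter->cache_read != cpu_buffer->read ||
 	    iter->cache_reader_page != cpu_buffer->reader_page)
 		rb_iter_reset(iter);	/* must update both cache fields,
 					 * or this mismatch repeats forever */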
      
      Fixes: 651e22f2701b "ring-buffer: Always reset iterator to reader page"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. 05 Oct, 2014 1 commit
  9. 17 Sep, 2014 2 commits
• ring-buffer: Up rb_iter_peek() loop count to 3 · 7f70b62e
      Steven Rostedt (Red Hat) authored
      commit 021de3d904b88b1771a3a2cfc5b75023c391e646 upstream.
      
After writing a test to try to trigger the bug that caused the
ring buffer iterator to become corrupted, I hit another bug:
      
       WARNING: CPU: 1 PID: 5281 at kernel/trace/ring_buffer.c:3766 rb_iter_peek+0x113/0x238()
       Modules linked in: ipt_MASQUERADE sunrpc [...]
       CPU: 1 PID: 5281 Comm: grep Tainted: G        W     3.16.0-rc3-test+ #143
       Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
        0000000000000000 ffffffff81809a80 ffffffff81503fb0 0000000000000000
        ffffffff81040ca1 ffff8800796d6010 ffffffff810c138d ffff8800796d6010
        ffff880077438c80 ffff8800796d6010 ffff88007abbe600 0000000000000003
       Call Trace:
        [<ffffffff81503fb0>] ? dump_stack+0x4a/0x75
        [<ffffffff81040ca1>] ? warn_slowpath_common+0x7e/0x97
        [<ffffffff810c138d>] ? rb_iter_peek+0x113/0x238
        [<ffffffff810c138d>] ? rb_iter_peek+0x113/0x238
        [<ffffffff810c14df>] ? ring_buffer_iter_peek+0x2d/0x5c
        [<ffffffff810c6f73>] ? tracing_iter_reset+0x6e/0x96
        [<ffffffff810c74a3>] ? s_start+0xd7/0x17b
        [<ffffffff8112b13e>] ? kmem_cache_alloc_trace+0xda/0xea
        [<ffffffff8114cf94>] ? seq_read+0x148/0x361
        [<ffffffff81132d98>] ? vfs_read+0x93/0xf1
        [<ffffffff81132f1b>] ? SyS_read+0x60/0x8e
        [<ffffffff8150bf9f>] ? tracesys+0xdd/0xe2
      
Debugging this bug, which triggers when rb_iter_peek() loops too
many times (more than 2 times), I discovered there's a case that can
cause that function to legitimately loop 3 times!

rb_iter_peek() is different from rb_buffer_peek(), as rb_buffer_peek()
only deals with the reader page (it's for consuming reads).
rb_iter_peek() is for traversing the buffer without consuming it, and as
such, it can loop for one more reason: if it hits the end of the reader
page or any other page, it will go to the next page and try again.
      
      That is, we have this:
      
       1. iter->head > iter->head_page->page->commit
          (rb_inc_iter() which moves the iter to the next page)
          try again
      
       2. event = rb_iter_head_event()
          event->type_len == RINGBUF_TYPE_TIME_EXTEND
          rb_advance_iter()
          try again
      
       3. read the event.
      
      But we never get to 3, because the count is greater than 2 and we
      cause the WARNING and return NULL.
      
      Up the counter to 3.
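
The change itself is a one-liner in rb_iter_peek() (a sketch of the
loop guard):

 	/* three legitimate passes are possible: next page, time
 	 * extend, then the event itself */
 	if (RB_WARN_ON(cpu_buffer, ++nr_loops > 3))
 		return NULL;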
      
      Fixes: 69d1b839 "ring-buffer: Bind time extend and data events together"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ring-buffer: Always reset iterator to reader page · 814aa5ad
      Steven Rostedt (Red Hat) authored
      commit 651e22f2701b4113989237c3048d17337dd2185c upstream.
      
When performing a consuming read, the ring buffer swaps out a
page from the ring buffer with an empty page, and the page that
was swapped out becomes the new reader page. The reader page
is owned by the reader, and since it was swapped out of the ring
buffer, writers do not have access to it (there's an exception
to that rule, but it's out of scope for this commit).
      
When reading the "trace" file, it is a non-consuming read, which
means that the data in the ring buffer will not be modified.
When the trace file is opened, a ring buffer iterator is allocated
and writes to the ring buffer are disabled, such that the iterator
will not have issues iterating over the data.

Although writes to the ring buffer are disabled, other reads,
including consuming reads, are not. If a consuming read happens, then
the iterator is reset and starts reading from the beginning again.
      
      My tests would sometimes trigger this bug on my i386 box:
      
      WARNING: CPU: 0 PID: 5175 at kernel/trace/trace.c:1527 __trace_find_cmdline+0x66/0xaa()
      Modules linked in:
      CPU: 0 PID: 5175 Comm: grep Not tainted 3.16.0-rc3-test+ #8
      Hardware name:                  /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
       00000000 00000000 f09c9e1c c18796b3 c1b5d74c f09c9e4c c103a0e3 c1b5154b
       f09c9e78 00001437 c1b5d74c 000005f7 c10bd85a c10bd85a c1cac57c f09c9eb0
 ed0e0000 f09c9e64 c103a185 00000009 f09c9e5c c1b5154b f09c9e78 f09c9e80
      Call Trace:
       [<c18796b3>] dump_stack+0x4b/0x75
       [<c103a0e3>] warn_slowpath_common+0x7e/0x95
       [<c10bd85a>] ? __trace_find_cmdline+0x66/0xaa
       [<c10bd85a>] ? __trace_find_cmdline+0x66/0xaa
       [<c103a185>] warn_slowpath_fmt+0x33/0x35
 [<c10bd85a>] __trace_find_cmdline+0x66/0xaa
       [<c10bed04>] trace_find_cmdline+0x40/0x64
       [<c10c3c16>] trace_print_context+0x27/0xec
       [<c10c4360>] ? trace_seq_printf+0x37/0x5b
       [<c10c0b15>] print_trace_line+0x319/0x39b
       [<c10ba3fb>] ? ring_buffer_read+0x47/0x50
       [<c10c13b1>] s_show+0x192/0x1ab
       [<c10bfd9a>] ? s_next+0x5a/0x7c
       [<c112e76e>] seq_read+0x267/0x34c
       [<c1115a25>] vfs_read+0x8c/0xef
       [<c112e507>] ? seq_lseek+0x154/0x154
       [<c1115ba2>] SyS_read+0x54/0x7f
       [<c188488e>] syscall_call+0x7/0xb
      ---[ end trace 3f507febd6b4cc83 ]---
      >>>> ##### CPU 1 buffer started ####
      
      Which was the __trace_find_cmdline() function complaining about the pid
      in the event record being negative.
      
      After adding more test cases, this would trigger more often. Strangely
      enough, it would never trigger on a single test, but instead would trigger
      only when running all the tests. I believe that was the case because it
      required one of the tests to be shutting down via delayed instances while
      a new test started up.
      
      After spending several days debugging this, I found that it was caused by
      the iterator becoming corrupted. Debugging further, I found out why
      the iterator became corrupted. It happened with the rb_iter_reset().
      
As consuming reads may read only part of the reader page, not all
of it, there's a "read" field to know where the last read took place.
The iterator must also start at the read position. In the rb_iter_reset()
code, if the reader page was disconnected from the ring buffer, the iterator
would start at the head page within the ring buffer (where writes still
happen). But the mistake there was that it still used the "read" field
to start the iterator on the head page, where it should always start
at zero, because readers never read from within the ring buffer where
writes occur.
      
      I originally wrote a patch to have it set the iter->head to 0 instead
      of iter->head_page->read, but then I questioned why it wasn't always
      setting the iter to point to the reader page, as the reader page is
      still valid.  The list_empty(reader_page->list) just means that it was
      successful in swapping out. But the reader_page may still have data.
      
      There was a bug report a long time ago that was not reproducible that
      had something about trace_pipe (consuming read) not matching trace
      (iterator read). This may explain why that happened.
      
Anyway, the correct answer to this bug is to always use the reader page
and not reset the iterator to inside the writable ring buffer.
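
A sketch of rb_iter_reset() after this change (abridged; the cache and
timestamp updates are omitted):

 static void rb_iter_reset(struct ring_buffer_iter *iter)
 {
 	struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;

 	/* always start on the reader page, at its "read" offset;
 	 * never inside the writable part of the ring buffer */
 	iter->head_page = cpu_buffer->reader_page;
 	iter->head = cpu_buffer->reader_page->read;
 	/* ... cache fields and read_stamp updated here ... */
 }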
      
      Fixes: d769041f "ring_buffer: implement new locking"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  10. 31 Jul, 2014 1 commit
  11. 28 Jul, 2014 3 commits
• ring-buffer: Fix polling on trace_pipe · 16de9ea3
      Martin Lau authored
      commit 97b8ee845393701edc06e27ccec2876ff9596019 upstream.
      
ring_buffer_poll_wait() should always add the poll_table to its wait_queue,
even when there is immediate data available.  Otherwise, the following epoll
and read sequence will eventually hang forever:
      
      1. Put some data to make the trace_pipe ring_buffer read ready first
      2. epoll_ctl(efd, EPOLL_CTL_ADD, trace_pipe_fd, ee)
      3. epoll_wait()
      4. read(trace_pipe_fd) till EAGAIN
      5. Add some more data to the trace_pipe ring_buffer
      6. epoll_wait() -> this epoll_wait() will block forever
      
      ~ During the epoll_ctl(efd, EPOLL_CTL_ADD,...) call in step 2,
        ring_buffer_poll_wait() returns immediately without adding poll_table,
        which has poll_table->_qproc pointing to ep_poll_callback(), to its
        wait_queue.
      ~ During the epoll_wait() call in step 3 and step 6,
        ring_buffer_poll_wait() cannot add ep_poll_callback() to its wait_queue
        because the poll_table->_qproc is NULL and it is how epoll works.
~ When there is new data available in step 6, the ring buffer does not know
  it has to call ep_poll_callback() because it is not in its wait queue.
  Hence, it blocks forever.
      
Other poll implementations seem to call poll_wait() unconditionally as the very
first thing they do.  For example, tcp_poll() in tcp.c.
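
A sketch of ring_buffer_poll_wait() after the fix (abridged): register
on the wait queue first, unconditionally, and only then report
readiness:

 	poll_wait(filp, &work->waiters, poll_table);	/* always first */

 	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
 	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
 		return POLLIN | POLLRDNORM;
 	return 0;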
      
      Link: http://lkml.kernel.org/p/20140610060637.GA14045@devbig242.prn2.facebook.com
      
      Fixes: 2a2cc8f7 "ftrace: allow the event pipe to be polled"
Reviewed-by: Chris Mason <clm@fb.com>
Signed-off-by: Martin Lau <kafai@fb.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• tracing: Add ftrace_trace_stack into __trace_puts/__trace_bputs · e250100b
      zhangwei(Jovi) authored
      commit 8abfb8727f4a724d31f9ccfd8013fbd16d539445 upstream.
      
Currently the stacktrace trace option has no effect on trace_printk()
with a constant string argument, because __trace_puts()/__trace_bputs()
are missing the call to ftrace_trace_stack().

In contrast, when trace_printk() is used with a non-constant string
argument (which calls into __trace_printk()/__trace_bprintk()), the
stacktrace trace option works. This inconsistency confuses users a lot.
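
A sketch of the change in __trace_puts() (the same call is added to
__trace_bputs(); the argument values follow my reading of the upstream
patch):

 	__buffer_unlock_commit(buffer, event);
 	/* added: record the stack like __trace_printk() does */
 	ftrace_trace_stack(buffer, irq_flags, 4, pc);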
      
Link: http://lkml.kernel.org/p/51E7A7C9.9040401@huawei.com
Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• tracing: Fix graph tracer with stack tracer on other archs · 9b87c4e5
      Steven Rostedt (Red Hat) authored
      commit 5f8bf2d263a20b986225ae1ed7d6759dc4b93af9 upstream.
      
      Running my ftrace tests on PowerPC, it failed the test that checks
      if function_graph tracer is affected by the stack tracer. It was.
Looking into this, I found that update_function_graph_func()
must be called even if the trampoline function is not changed.
This is because archs like PowerPC do not support ftrace_ops being
passed by assembly and instead use a helper function (what the
trampoline function points to). Since this function is not changed
even when multiple ftrace_ops are added to the code, the early exit
taken before calling update_function_graph_func() will miss that
the update must still be done.
      
Call update_function_graph_func() for all calls to
update_ftrace_function().
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  12. 17 Jul, 2014 1 commit
  13. 09 Jul, 2014 1 commit
  14. 07 Jul, 2014 1 commit
• tracing: Try again for saved cmdline if failed due to locking · c5bce736
      Steven Rostedt (Red Hat) authored
      commit 379cfdac37923653c9d4242d10052378b7563005 upstream.
      
      In order to prevent the saved cmdline cache from being filled when
      tracing is not active, the comms are only recorded after a trace event
      is recorded.
      
      The problem is, a comm can fail to be recorded if the trace_cmdline_lock
      is held. That lock is taken via a trylock to allow it to happen from
      any context (including NMI). If the lock fails to be taken, the comm
      is skipped. No big deal, as we will try again later.
      
But! Because of the code that was added to only record after an event,
we may not try again later, as the recording is made as a one-shot per
event per CPU.
      
      Only disable the recording of the comm if the comm is actually recorded.
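
A sketch of the fix in tracing_record_cmdline(): trace_save_cmdline()
now reports whether it actually took the lock and saved the comm, and
the one-shot flag is cleared only on success:

 	if (trace_save_cmdline(tsk))
 		__this_cpu_write(trace_cmdline_save, false);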
      
      Fixes: 7ffbd48d "tracing: Cache comms only after an event occurred"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  15. 07 Jun, 2014 2 commits
• ftrace/module: Hardcode ftrace_module_init() call into load_module() · 7d54b5cd
      Steven Rostedt (Red Hat) authored
      commit a949ae560a511fe4e3adf48fa44fefded93e5c2b upstream.
      
      A race exists between module loading and enabling of function tracer.
      
      	CPU 1				CPU 2
      	-----				-----
        load_module()
         module->state = MODULE_STATE_COMING
      
      				register_ftrace_function()
      				 mutex_lock(&ftrace_lock);
      				 ftrace_startup()
      				  update_ftrace_function();
      				   ftrace_arch_code_modify_prepare()
      				    set_all_module_text_rw();
      				   <enables-ftrace>
      				    ftrace_arch_code_modify_post_process()
      				     set_all_module_text_ro();
      
      				[ here all module text is set to RO,
      				  including the module that is
      				  loading!! ]
      
         blocking_notifier_call_chain(MODULE_STATE_COMING);
          ftrace_init_module()
      
           [ tries to modify code, but it's RO, and fails!
             ftrace_bug() is called]
      
When this race happens, ftrace_bug() will produce a nasty warning and
all of the function tracing features will be disabled until reboot.

The simple solution is to treat module load the same way the core
kernel is treated at boot: to hardcode the ftrace function modification
of converting calls to mcount into nops. This is done in init/main.c,
and there's no reason it could not be done in load_module(). This gives
better control of the changes and doesn't tie the state of the
module to its notifiers as much. Ftrace is special; it needs to be
treated as such.
      
The reason this works is that ftrace_module_init() is
called while the module is in MODULE_STATE_UNFORMED, which is ignored
by the set_all_module_text_ro() call.
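
A sketch of where the call lands in load_module() (kernel/module.c,
abridged):

 	/* module is still MODULE_STATE_UNFORMED here, so
 	 * set_all_module_text_ro() will skip it while ftrace
 	 * converts the mcount calls to nops */
 	ftrace_module_init(mod);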
      
Link: http://lkml.kernel.org/r/1395637826-3312-1-git-send-email-indou.takao@jp.fujitsu.com
Reported-by: Takao Indoh <indou.takao@jp.fujitsu.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• Samsung sources drop · 448817bd
      Mauro Ribeiro authored
  16. 31 May, 2014 1 commit
• blktrace: fix accounting of partially completed requests · e9d93394
      Roman Pen authored
      commit af5040da01ef980670b3741b3e10733ee3e33566 upstream.
      
trace_block_rq_complete does not take into account that a request can
be partially completed, so we can get the following incorrect output
from blkparse:
      
        C   R 232 + 240 [0]
        C   R 240 + 232 [0]
        C   R 248 + 224 [0]
        C   R 256 + 216 [0]
      
      but should be:
      
        C   R 232 + 8 [0]
        C   R 240 + 8 [0]
        C   R 248 + 8 [0]
        C   R 256 + 8 [0]
      
Also, the overall summary statistics for completed requests and the
final throughput will be incorrect.

This patch takes the real completion size of the request into account
and fixes the wrong completion accounting.
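
A sketch of the shape of the fix: the completion tracepoint is handed
the number of bytes actually completed in this pass, rather than
blktrace deriving the size from the (already shrunken) request:

 	/* in blk_update_request(), once per partial completion */
 	trace_block_rq_complete(req->q, req, nr_bytes);
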
Signed-off-by: Roman Pen <r.peniaev@gmail.com>
      CC: Steven Rostedt <rostedt@goodmis.org>
      CC: Frederic Weisbecker <fweisbec@gmail.com>
      CC: Ingo Molnar <mingo@redhat.com>
      CC: linux-kernel@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  17. 31 Mar, 2014 1 commit
• tracing: Fix array size mismatch in format string · a1c10a94
      Vaibhav Nagarnaik authored
      commit 87291347c49dc40aa339f587b209618201c2e527 upstream.
      
In event format strings, the array size is reported in two locations:
once in the array subscript and again via the "size:" attribute. The
values reported there mismatch.

For example, in sched:sched_switch the prev_comm and next_comm character
arrays have a subscript value of [32], whereas the actual field size is
16.
      
      name: sched_switch
      ID: 301
      format:
              field:unsigned short common_type;       offset:0;       size:2; signed:0;
              field:unsigned char common_flags;       offset:2;       size:1; signed:0;
              field:unsigned char common_preempt_count;       offset:3;       size:1;signed:0;
              field:int common_pid;   offset:4;       size:4; signed:1;
      
              field:char prev_comm[32];       offset:8;       size:16;        signed:1;
              field:pid_t prev_pid;   offset:24;      size:4; signed:1;
              field:int prev_prio;    offset:28;      size:4; signed:1;
              field:long prev_state;  offset:32;      size:8; signed:1;
              field:char next_comm[32];       offset:40;      size:16;        signed:1;
              field:pid_t next_pid;   offset:56;      size:4; signed:1;
              field:int next_prio;    offset:60;      size:4; signed:1;
      
      After bisection, the following commit was blamed:
      92edca07 tracing: Use direct field, type and system names
      
This commit removes the duplication of strings for field->name and
field->type, assuming that all the strings passed to
__trace_define_field() are immutable. This is not true for arrays, where
the type string is created in the event_storage variable and field->type
for all array fields points to event_storage.

Use __stringify() to create a string constant for the type string.
      
      Also, get rid of event_storage and event_storage_mutex that are not
      needed anymore.
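
__stringify() is the standard two-level stringification macro from
include/linux/stringify.h; the extra level makes the argument expand
before it is turned into a string, so an array field's type can be
built as a true compile-time string constant. The TYPE_STR helper
below is only an illustration, not from the patch:

 #define __stringify_1(x...)	#x
 #define __stringify(x...)	__stringify_1(x)

 /* hypothetical illustration of building an array field's type */
 #define TYPE_STR(type, len)	#type "[" __stringify(len) "]"
 /* TYPE_STR(char, TASK_COMM_LEN) expands to "char" "[" "16" "]",
  * i.e. the single string constant "char[16]" */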
      
Also, an added benefit is that this reduces the overhead of events a bit more:
      
         text    data     bss     dec     hex filename
      8424787 2036472 1302528 11763787         b3804b vmlinux
      8420814 2036408 1302528 11759750         b37086 vmlinux.patched
      
      Link: http://lkml.kernel.org/r/1392349908-29685-1-git-send-email-vnagarnaik@google.com
      
      Cc: Laurent Chavey <chavey@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  18. 24 Mar, 2014 1 commit
  19. 22 Feb, 2014 1 commit
• ring-buffer: Fix first commit on sub-buffer having non-zero delta · f0bf1b24
      Steven Rostedt (Red Hat) authored
      commit d651aa1d68a2f0a7ee65697b04c6a92f8c0a12f2 upstream.
      
      Each sub-buffer (buffer page) has a full 64 bit timestamp. The events on
      that page use a 27 bit delta against that timestamp in order to save on
      bits written to the ring buffer. If the time between events is larger than
      what the 27 bits can hold, a "time extend" event is added to hold the
      entire 64 bit timestamp again and the events after that hold a delta from
      that timestamp.
      
      As a "time extend" is always paired with an event, it is logical to just
      allocate the event with the time extend, to make things a bit more efficient.
      
      Unfortunately, when the pairing code was written, it removed the "delta = 0"
      from the first commit on a page, causing the events on the page to be
      slightly skewed.
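
The fix restores that, as a sketch (in the reserve path, abridged):

 	/*
 	 * If this is the first commit on the page, then it has the
 	 * same timestamp as the page itself, so its delta is zero.
 	 */
 	if (!tail)
 		delta = 0;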
      
      Fixes: 69d1b839 "ring-buffer: Bind time extend and data events together"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  20. 13 Feb, 2014 5 commits
• ftrace: Have function graph only trace based on global_ops filters · 1499a3eb
      Steven Rostedt authored
      commit 23a8e8441a0a74dd612edf81dc89d1600bc0a3d1 upstream.
      
Doing some different tests, I discovered that function graph tracing, when
filtered via the set_ftrace_filter and set_ftrace_notrace files, does
not always stay within those filters if another function ftrace_ops is
registered to trace functions.
      
      The reason is that function graph just happens to trace all functions
      that the function tracer enables. When there was only one user of
      function tracing, the function graph tracer did not need to worry about
      being called by functions that it did not want to trace. But now that there
      are other users, this becomes a problem.
      
      For example, one just needs to do the following:
      
       # cd /sys/kernel/debug/tracing
       # echo schedule > set_ftrace_filter
       # echo function_graph > current_tracer
       # cat trace
      [..]
       0)               |  schedule() {
       ------------------------------------------
       0)    <idle>-0    =>   rcu_pre-7
       ------------------------------------------
      
       0) ! 2980.314 us |  }
       0)               |  schedule() {
       ------------------------------------------
       0)   rcu_pre-7    =>    <idle>-0
       ------------------------------------------
      
       0) + 20.701 us   |  }
      
       # echo 1 > /proc/sys/kernel/stack_tracer_enabled
       # cat trace
      [..]
       1) + 20.825 us   |      }
       1) + 21.651 us   |    }
       1) + 30.924 us   |  } /* SyS_ioctl */
       1)               |  do_page_fault() {
       1)               |    __do_page_fault() {
       1)   0.274 us    |      down_read_trylock();
       1)   0.098 us    |      find_vma();
       1)               |      handle_mm_fault() {
       1)               |        _raw_spin_lock() {
       1)   0.102 us    |          preempt_count_add();
       1)   0.097 us    |          do_raw_spin_lock();
       1)   2.173 us    |        }
       1)               |        do_wp_page() {
       1)   0.079 us    |          vm_normal_page();
       1)   0.086 us    |          reuse_swap_page();
       1)   0.076 us    |          page_move_anon_rmap();
       1)               |          unlock_page() {
       1)   0.082 us    |            page_waitqueue();
       1)   0.086 us    |            __wake_up_bit();
       1)   1.801 us    |          }
       1)   0.075 us    |          ptep_set_access_flags();
       1)               |          _raw_spin_unlock() {
       1)   0.098 us    |            do_raw_spin_unlock();
       1)   0.105 us    |            preempt_count_sub();
       1)   1.884 us    |          }
       1)   9.149 us    |        }
       1) + 13.083 us   |      }
       1)   0.146 us    |      up_read();
      
When the stack tracer was enabled, it enabled all functions to be traced, and
the function graph tracer then traced them as well. This is a side effect that
should not occur.

To fix this, a test is added when function tracing is changed, as well as when
the graph tracer is enabled, to see if anything other than the ftrace global_ops
function tracer is enabled. If so, the graph tracer calls a test trampoline
that will look at the function being traced and compare it with the
filters defined by the global_ops.
      
      As an optimization, if there's no other function tracers registered, or if
      the only registered function tracers also use the global ops, the function
      graph infrastructure will call the registered function graph callback directly
      and not go through the test trampoline.
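
A sketch of the test trampoline (paraphrasing the upstream patch;
names quoted from memory):

 static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace)
 {
 	/* only trace functions that pass the global_ops filters */
 	if (!ftrace_ops_test(&global_ops, trace->func, NULL))
 		return 0;

 	return __ftrace_graph_entry(trace);
 }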
      
      Fixes: d2d45c7a "tracing: Have stack_tracer use a separate list of functions"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ftrace: Fix synchronization location disabling and freeing ftrace_ops · b6c5a8d3
      Steven Rostedt authored
      commit a4c35ed241129dd142be4cadb1e5a474a56d5464 upstream.
      
The synchronization needed after ftrace_ops are unregistered must happen
after the callback is disabled from being called by functions.

It currently happens after the function is removed from the
internal lists, but not after the function callbacks were disabled, leaving
the callbacks susceptible to being called after they are freed.

This affects perf and any external users of function tracing (LTTng and
SystemTap).
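
Conceptually, the ordering the fix enforces in the shutdown path looks
like this (a sketch of the sequence, not the literal diff):

 	__unregister_ftrace_function(ops);	/* take ops off the list  */
 	ftrace_run_update_code(command);	/* disable the call sites */
 	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
 		synchronize_sched();		/* only now safe to free  */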
      
      Fixes: cdbe61bf "ftrace: Allow dynamically allocated function tracers"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ftrace: Synchronize setting function_trace_op with ftrace_trace_function · a0d0a2a5
      Steven Rostedt authored
      commit 405e1d834807e51b2ebd3dea81cb51e53fb61504 upstream.
      
      ftrace_trace_function is a variable that holds what function will be called
      directly by the assembly code (mcount). If just a single function is
      registered and it handles recursion itself, then the assembly will call that
      function directly without any helper function. It also passes in the
      ftrace_op that was registered with the callback. The ftrace_op to send is
      stored in the function_trace_op variable.
      
The ftrace_trace_function and function_trace_op need to be coordinated such
that the callback won't be called with the wrong ftrace_op; otherwise
bad things can happen if it expected a different op. Luckily, there's no
callback that doesn't use the helper functions and requires this. But
there soon will be, and this needs to be fixed.
      
Use a set_function_trace_op to store the ftrace_op to set
function_trace_op to when it is safe to do so (during the update function
within the breakpoint or stop_machine calls). If dynamic ftrace is not
being used (static tracing), then we have to do a bit more synchronization
when ftrace_trace_function is set, as that takes effect immediately
(as opposed to dynamic ftrace doing it with the modification of the trampoline).
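
Conceptually (a sketch of the two-step publish, not the literal code):

 	/* stage the new op; not yet visible to the assembly side */
 	set_function_trace_op = ops;
 	/* ... inside the breakpoint/stop_machine update window ... */
 	function_trace_op = set_function_trace_op;	/* publish op    */
 	ftrace_trace_function = func;			/* then the func */
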
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• tracing: Check if tracing is enabled in trace_puts() · f74bb740
      Steven Rostedt (Red Hat) authored
      commit 3132e107d608f8753240d82d61303c500fd515b4 upstream.
      
      If trace_puts() is used very early in boot up, it can crash the machine
      if it is called before the ring buffer is allocated. If a trace_printk()
      is used with no arguments, then it will be converted into a trace_puts()
      and suffer the same fate.
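
A sketch of the guard added at the top of __trace_puts() and
__trace_bputs() (condition as I read the upstream patch):

 	if (unlikely(tracing_selftest_running || tracing_disabled))
 		return 0;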
      
      Fixes: 09ae7234 "tracing: Add trace_puts() for even faster trace_printk() tracing"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• tracing: Have trace buffer point back to trace_array · fb23eaf4
      Steven Rostedt (Red Hat) authored
      commit dced341b2d4f06668efaab33f88de5d287c0f45b upstream.
      
      The trace buffer has a descriptor pointer that goes back to the trace
      array. But it was never assigned. Luckily, nothing uses it (yet), but
      it will in the future.
      
      Although nothing currently uses this, if any of the new features get
      backported to older kernels, and because this is such a simple change,
      I'm marking it for stable too.
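
The fix itself is a single assignment in the buffer allocation path
(sketch):

 	buf->tr = tr;	/* let the buffer find its trace_array */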
      
      Fixes: 12883efb "tracing: Consolidate max_tr into main trace_array structure"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  21. 09 Jan, 2014 1 commit
• ftrace: Initialize the ftrace profiler for each possible cpu · 885154ce
      Miao Xie authored
      commit c4602c1c818bd6626178d6d3fcc152d9f2f48ac0 upstream.
      
      Ftrace currently initializes only the online CPUs. This implementation has
      two problems:
      - If we online a CPU after we enable the function profile, and then run the
        test, we will lose the trace information on that CPU.
        Steps to reproduce:
        # echo 0 > /sys/devices/system/cpu/cpu1/online
        # cd <debugfs>/tracing/
        # echo <some function name> >> set_ftrace_filter
        # echo 1 > function_profile_enabled
        # echo 1 > /sys/devices/system/cpu/cpu1/online
        # run test
- If we offline a CPU before we enable the function profile, we will not clear
  the trace information when we enable the function profile. This is
  confusing for users.
        Steps to reproduce:
        # cd <debugfs>/tracing/
        # echo <some function name> >> set_ftrace_filter
        # echo 1 > function_profile_enabled
        # run test
        # cat trace_stat/function*
        # echo 0 > /sys/devices/system/cpu/cpu1/online
        # echo 0 > function_profile_enabled
        # echo 1 > function_profile_enabled
        # cat trace_stat/function*
        # run test
        # cat trace_stat/function*
      
      So it is better that we initialize the ftrace profiler for each possible cpu
      every time we enable the function profile instead of just the online ones.
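
A sketch of the change in ftrace_profile_init():

 	for_each_possible_cpu(cpu) {	/* was: for_each_online_cpu() */
 		ret = ftrace_profile_init_cpu(cpu);
 		if (ret)
 			break;
 	}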
      
Link: http://lkml.kernel.org/r/1387178401-10619-1-git-send-email-miaox@cn.fujitsu.com
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  22. 04 Dec, 2013 1 commit
• ftrace: Fix function graph with loading of modules · 2940c25b
      Steven Rostedt (Red Hat) authored
      commit 8a56d7761d2d041ae5e8215d20b4167d8aa93f51 upstream.
      
      Commit 8c4f3c3fa9681 "ftrace: Check module functions being traced on reload"
      fixed module loading and unloading with respect to function tracing, but
      it missed the function graph tracer. If you perform the following
      
       # cd /sys/kernel/debug/tracing
       # echo function_graph > current_tracer
       # modprobe nfsd
       # echo nop > current_tracer
      
      You'll get the following oops message:
      
       ------------[ cut here ]------------
       WARNING: CPU: 2 PID: 2910 at /linux.git/kernel/trace/ftrace.c:1640 __ftrace_hash_rec_update.part.35+0x168/0x1b9()
       Modules linked in: nfsd exportfs nfs_acl lockd ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables uinput snd_hda_codec_idt
       CPU: 2 PID: 2910 Comm: bash Not tainted 3.13.0-rc1-test #7
       Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
        0000000000000668 ffff8800787efcf8 ffffffff814fe193 ffff88007d500000
        0000000000000000 ffff8800787efd38 ffffffff8103b80a 0000000000000668
        ffffffff810b2b9a ffffffff81a48370 0000000000000001 ffff880037aea000
       Call Trace:
        [<ffffffff814fe193>] dump_stack+0x4f/0x7c
        [<ffffffff8103b80a>] warn_slowpath_common+0x81/0x9b
        [<ffffffff810b2b9a>] ? __ftrace_hash_rec_update.part.35+0x168/0x1b9
        [<ffffffff8103b83e>] warn_slowpath_null+0x1a/0x1c
        [<ffffffff810b2b9a>] __ftrace_hash_rec_update.part.35+0x168/0x1b9
        [<ffffffff81502f89>] ? __mutex_lock_slowpath+0x364/0x364
        [<ffffffff810b2cc2>] ftrace_shutdown+0xd7/0x12b
        [<ffffffff810b47f0>] unregister_ftrace_graph+0x49/0x78
        [<ffffffff810c4b30>] graph_trace_reset+0xe/0x10
        [<ffffffff810bf393>] tracing_set_tracer+0xa7/0x26a
        [<ffffffff810bf5e1>] tracing_set_trace_write+0x8b/0xbd
        [<ffffffff810c501c>] ? ftrace_return_to_handler+0xb2/0xde
        [<ffffffff811240a8>] ? __sb_end_write+0x5e/0x5e
        [<ffffffff81122aed>] vfs_write+0xab/0xf6
        [<ffffffff8150a185>] ftrace_graph_caller+0x85/0x85
        [<ffffffff81122dbd>] SyS_write+0x59/0x82
        [<ffffffff8150a185>] ftrace_graph_caller+0x85/0x85
        [<ffffffff8150a2d2>] system_call_fastpath+0x16/0x1b
       ---[ end trace 940358030751eafb ]---
      
The above-mentioned commit didn't go far enough. It covered the
function tracer by adding checks in __register_ftrace_function(). The
problem is that the function graph tracer circumvents that (for a slight
efficiency gain when function graph tracing is running with a function
tracer; the gain was not worth this).

The problem came with ftrace_startup(), which should always be called after
__register_ftrace_function() if you want this bug to be completely fixed.

Anyway, this solution moves __register_ftrace_function() inside
ftrace_startup() and removes the need to call both.
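
A sketch of the reordering in ftrace_startup() (abridged):

 static int ftrace_startup(struct ftrace_ops *ops, int command)
 {
 	int ret;

 	if (unlikely(ftrace_disabled))
 		return -ENODEV;

 	/* registration now happens inside startup, so the graph
 	 * tracer cannot bypass it */
 	ret = __register_ftrace_function(ops);
 	if (ret)
 		return ret;
 	/* ... */
 }
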
Reported-by: Dave Wysochanski <dwysocha@redhat.com>
Fixes: ed926f9b ("ftrace: Use counters to enable functions to trace")
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  23. 29 Nov, 2013 1 commit
  24. 20 Nov, 2013 1 commit
• tracing: Fix potential out-of-bounds in trace_get_user() · 84300636
      Steven Rostedt authored
      commit 057db8488b53d5e4faa0cedb2f39d4ae75dfbdbb upstream.
      
Andrey reported the following:
      
      ERROR: AddressSanitizer: heap-buffer-overflow on address ffff8800359c99f3
      ffff8800359c99f3 is located 0 bytes to the right of 243-byte region [ffff8800359c9900, ffff8800359c99f3)
      Accessed by thread T13003:
        #0 ffffffff810dd2da (asan_report_error+0x32a/0x440)
        #1 ffffffff810dc6b0 (asan_check_region+0x30/0x40)
        #2 ffffffff810dd4d3 (__tsan_write1+0x13/0x20)
        #3 ffffffff811cd19e (ftrace_regex_release+0x1be/0x260)
        #4 ffffffff812a1065 (__fput+0x155/0x360)
        #5 ffffffff812a12de (____fput+0x1e/0x30)
        #6 ffffffff8111708d (task_work_run+0x10d/0x140)
        #7 ffffffff810ea043 (do_exit+0x433/0x11f0)
        #8 ffffffff810eaee4 (do_group_exit+0x84/0x130)
        #9 ffffffff810eafb1 (SyS_exit_group+0x21/0x30)
        #10 ffffffff81928782 (system_call_fastpath+0x16/0x1b)
      
      Allocated by thread T5167:
        #0 ffffffff810dc778 (asan_slab_alloc+0x48/0xc0)
        #1 ffffffff8128337c (__kmalloc+0xbc/0x500)
        #2 ffffffff811d9d54 (trace_parser_get_init+0x34/0x90)
        #3 ffffffff811cd7b3 (ftrace_regex_open+0x83/0x2e0)
        #4 ffffffff811cda7d (ftrace_filter_open+0x2d/0x40)
        #5 ffffffff8129b4ff (do_dentry_open+0x32f/0x430)
        #6 ffffffff8129b668 (finish_open+0x68/0xa0)
        #7 ffffffff812b66ac (do_last+0xb8c/0x1710)
        #8 ffffffff812b7350 (path_openat+0x120/0xb50)
        #9 ffffffff812b8884 (do_filp_open+0x54/0xb0)
        #10 ffffffff8129d36c (do_sys_open+0x1ac/0x2c0)
        #11 ffffffff8129d4b7 (SyS_open+0x37/0x50)
        #12 ffffffff81928782 (system_call_fastpath+0x16/0x1b)
      
      Shadow bytes around the buggy address:
        ffff8800359c9700: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
        ffff8800359c9780: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
        ffff8800359c9800: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9880: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      =>ffff8800359c9980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[03]fb
        ffff8800359c9a00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9a80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
        ffff8800359c9b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ffff8800359c9c00: 00 00 00 00 00 00 00 00 fa fa fa fa fa fa fa fa
      Shadow byte legend (one shadow byte represents 8 application bytes):
        Addressable:           00
        Partially addressable: 01 02 03 04 05 06 07
        Heap redzone:          fa
        Heap kmalloc redzone:  fb
        Freed heap region:     fd
        Shadow gap:            fe
      
      The out-of-bounds access happens on 'parser->buffer[parser->idx] = 0;'
      
Although the crash happened in ftrace_regex_open(), the real bug
occurred in trace_get_user(), where parser->idx is incremented
without a check against the size. The way it is triggered
is if userspace sends in 128 characters (EVENT_BUF_SIZE + 1): the loop
that reads the last character stores it and then breaks out because
there are no more characters. Then the last character is read to determine
what to do next, and the index is incremented without checking the size.

Then the caller of trace_get_user() usually nulls out the last character
with a zero, but since the index is equal to the size, it writes a nul
character after the allocated space, which can corrupt memory.
      
      Luckily, only root user has write access to this file.
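
A sketch of the fixed branch in trace_get_user(): the increment is now
bounded, and overflowing input returns -EINVAL instead of running past
the buffer:

 	if (isspace(ch)) {
 		parser->buffer[parser->idx] = 0;
 		parser->cont = false;
 	} else if (parser->idx < parser->size - 1) {
 		parser->cont = true;
 		parser->buffer[parser->idx++] = ch;
 	} else {
 		ret = -EINVAL;
 		goto out;
 	}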
      
Link: http://lkml.kernel.org/r/20131009222323.04fd1a0d@gandalf.local.home
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  25. 29 Aug, 2013 8 commits
• ftrace: Check module functions being traced on reload · f05e999f
      Steven Rostedt (Red Hat) authored
      commit 8c4f3c3fa9681dc549cd35419b259496082fef8b upstream.
      
      There's been a nasty bug that would show up and not give much info.
      The bug displayed the following warning:
      
       WARNING: at kernel/trace/ftrace.c:1529 __ftrace_hash_rec_update+0x1e3/0x230()
       Pid: 20903, comm: bash Tainted: G           O 3.6.11+ #38405.trunk
       Call Trace:
        [<ffffffff8103e5ff>] warn_slowpath_common+0x7f/0xc0
        [<ffffffff8103e65a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff810c2ee3>] __ftrace_hash_rec_update+0x1e3/0x230
        [<ffffffff810c4f28>] ftrace_hash_move+0x28/0x1d0
        [<ffffffff811401cc>] ? kfree+0x2c/0x110
        [<ffffffff810c68ee>] ftrace_regex_release+0x8e/0x150
        [<ffffffff81149f1e>] __fput+0xae/0x220
        [<ffffffff8114a09e>] ____fput+0xe/0x10
        [<ffffffff8105fa22>] task_work_run+0x72/0x90
        [<ffffffff810028ec>] do_notify_resume+0x6c/0xc0
        [<ffffffff8126596e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
        [<ffffffff815c0f88>] int_signal+0x12/0x17
       ---[ end trace 793179526ee09b2c ]---
      
      It was finally narrowed down to unloading a module that was being traced.
      
It was actually more than that. When functions are being traced, there's
a table of all functions, each with a ref count of the number of active
tracers attached to that function. When a function trace callback is
registered to a function, the function's record ref count is incremented.
      When it is unregistered, the function's record ref count is decremented.
      If an inconsistency is detected (ref count goes below zero) the above
      warning is shown and the function tracing is permanently disabled until
      reboot.
      
      The ftrace callback ops holds a hash of functions that it filters on
      (and/or filters off). If the hash is empty, the default means to filter
      all functions (for the filter_hash) or to disable no functions (for the
      notrace_hash).
      
      When a module is unloaded, it frees the function records that represent
      the module functions. These records exist on their own pages, that is
      function records for one module will not exist on the same page as
      function records for other modules or even the core kernel.
      
      Now when a module unloads, the records that represents its functions are
      freed. When the module is loaded again, the records are recreated with
      a default ref count of zero (unless there's a callback that traces all
      functions, then they will also be traced, and the ref count will be
      incremented).
      
      The problem is that if an ftrace callback hash includes functions of the
      module being unloaded, those hash entries will not be removed. If the
      module is reloaded in the same location, the hash entries still point
      to the functions of the module but the module's ref counts do not reflect
      that.
      
      With the help of Steve and Joern, we found a reproducer:
      
       Using uinput module and uinput_release function.
      
       cd /sys/kernel/debug/tracing
       modprobe uinput
       echo uinput_release > set_ftrace_filter
       echo function > current_tracer
       rmmod uinput
       modprobe uinput
       # check /proc/modules to see if loaded in same addr, otherwise try again
       echo nop > current_tracer
      
       [BOOM]
      
      The above loads the uinput module, which creates a table of functions that
      can be traced within the module.
      
      We add uinput_release to the filter_hash to trace just that function.
      
Enable function tracing, which increments the ref count of the record
associated with uinput_release.
      
      Remove uinput, which frees the records including the one that represents
      uinput_release.
      
      Load the uinput module again (and make sure it's at the same address).
      This recreates the function records all with a ref count of zero,
      including uinput_release.
      
      Disable function tracing, which will decrement the ref count for uinput_release
      which is now zero because of the module removal and reload, and we have
      a mismatch (below zero ref count).
      
The solution is to check all currently tracing ftrace callbacks to see if any
are tracing any of the module's functions when a module is loaded (it already does
that with callbacks that trace all functions). If a callback happens to have
a module function being traced, it increments that record's ref count and starts
tracing that function.

There may be a strange side effect with this, where tracing module functions
on unload and then reloading a new module may have that new module's functions
being traced. This may be something that confuses the user, but it's not
a big deal. Another approach is to disable all callback hashes on module unload,
but this leaves some ftrace callbacks that may not be registered, yet can
still have hashes tracing the module's functions without ftrace knowing about
it. That situation can cause the same bug. This solution solves that case too.
Another benefit of this solution is that it is possible to trace a module's
functions on unload and load.
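
A sketch of the check (helper names quoted from memory of the upstream
patch): when a module's records are created, count the registered
callbacks whose filter hashes already reference each record, and start
tracing the record if any do:

 static int referenced_filters(struct dyn_ftrace *rec)
 {
 	struct ftrace_ops *ops;
 	int cnt = 0;

 	/* walk every registered callback's hashes */
 	for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) {
 		if (ops_references_rec(ops, rec))
 			cnt++;
 	}

 	return cnt;
 }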
      
Link: http://lkml.kernel.org/r/20130705142629.GA325@redhat.com
Reported-by: Jörn Engel <joern@logfs.org>
Reported-by: Dave Jones <davej@redhat.com>
Reported-by: Steve Hodgson <steve@purestorage.com>
Tested-by: Steve Hodgson <steve@purestorage.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• tracing/uprobes: Fail to unregister if probe event files are in use · 396dd500
      Steven Rostedt (Red Hat) authored
      commit c6c2401d8bbaf9edc189b4c35a8cb2780b8b988e upstream.
      
Uprobes suffer the same problem that kprobes have: there's a race between
writing to the "enable" file and removing the probe. The removal path checks
whether the probe is in use and, if it is not, goes about deleting the probe
and the event that represents it. The problem is that after the in-use check,
the probe can be enabled, and the deletion of the event (access
to the probe) will then fail, as it is in use. But the uprobe will still be
deleted. This is a problem, as the event can reference the uprobe that
was deleted.
      
      The fix is to remove the event first, and check to make sure the event
      removal succeeds. Then it is safe to remove the probe.
      
      When the event exists, either ftrace or perf can enable the probe and
      prevent the event from being removed.
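
A sketch of the resulting removal order in trace_uprobe.c (abridged;
paraphrasing the patch):

 static int unregister_trace_uprobe(struct trace_uprobe *tu)
 {
 	int ret;

 	/* remove the event first; fails if any file is in use */
 	ret = unregister_uprobe_event(tu);
 	if (ret)
 		return ret;

 	/* only now is it safe to delete the probe itself */
 	list_del(&tu->list);
 	free_trace_uprobe(tu);
 	return 0;
 }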
      
Link: http://lkml.kernel.org/r/20130704034038.991525256@goodmis.org
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• tracing/kprobes: Fail to unregister if probe event files are in use · 428fbcf9
      Steven Rostedt (Red Hat) authored
      commit 40c32592668b727cbfcf7b1c0567f581bd62a5e4 upstream.
      
      When a probe is being removed, it cleans up the event files that correspond
      to the probe. But there is a race between writing to one of these files
      and deleting the probe. This is especially true for the "enable" file.
      
      	CPU 0				CPU 1
      	-----				-----
      
      				  fd = open("enable",O_WRONLY);
      
        probes_open()
        release_all_trace_probes()
        unregister_trace_probe()
        if (trace_probe_is_enabled(tp))
      	return -EBUSY
      
      				   write(fd, "1", 1)
      				   __ftrace_set_clr_event()
      				   call->class->reg()
      				    (kprobe_register)
      				     enable_trace_probe(tp)
      
        __unregister_trace_probe(tp);
        list_del(&tp->list)
        unregister_probe_event(tp) <-- fails!
        free_trace_probe(tp)
      
      				   write(fd, "0", 1)
      				   __ftrace_set_clr_event()
      				   call->class->unreg
      				    (kprobe_register)
      				    disable_trace_probe(tp) <-- BOOM!
      
A test program was written that used two threads to simulate the
above scenario, adding a nanosleep() interval to change the timings,
and after several thousand runs it was able to trigger this bug
and crash:
      
      BUG: unable to handle kernel paging request at 00000005000000f9
      IP: [<ffffffff810dee70>] probes_open+0x3b/0xa7
      PGD 7808a067 PUD 0
      Oops: 0000 [#1] PREEMPT SMP
      Dumping ftrace buffer:
      ---------------------------------
      Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6
      CPU: 1 PID: 2070 Comm: test-kprobe-rem Not tainted 3.11.0-rc3-test+ #47
      Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
      task: ffff880077756440 ti: ffff880076e52000 task.ti: ffff880076e52000
      RIP: 0010:[<ffffffff810dee70>]  [<ffffffff810dee70>] probes_open+0x3b/0xa7
      RSP: 0018:ffff880076e53c38  EFLAGS: 00010203
      RAX: 0000000500000001 RBX: ffff88007844f440 RCX: 0000000000000003
      RDX: 0000000000000003 RSI: 0000000000000003 RDI: ffff880076e52000
      RBP: ffff880076e53c58 R08: ffff880076e53bd8 R09: 0000000000000000
      R10: ffff880077756440 R11: 0000000000000006 R12: ffffffff810dee35
      R13: ffff880079250418 R14: 0000000000000000 R15: ffff88007844f450
      FS:  00007f87a276f700(0000) GS:ffff88007d480000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 00000005000000f9 CR3: 0000000077262000 CR4: 00000000000007e0
      Stack:
       ffff880076e53c58 ffffffff81219ea0 ffff88007844f440 ffffffff810dee35
       ffff880076e53ca8 ffffffff81130f78 ffff8800772986c0 ffff8800796f93a0
       ffffffff81d1b5d8 ffff880076e53e04 0000000000000000 ffff88007844f440
      Call Trace:
       [<ffffffff81219ea0>] ? security_file_open+0x2c/0x30
       [<ffffffff810dee35>] ? unregister_trace_probe+0x4b/0x4b
       [<ffffffff81130f78>] do_dentry_open+0x162/0x226
       [<ffffffff81131186>] finish_open+0x46/0x54
       [<ffffffff8113f30b>] do_last+0x7f6/0x996
       [<ffffffff8113cc6f>] ? inode_permission+0x42/0x44
       [<ffffffff8113f6dd>] path_openat+0x232/0x496
       [<ffffffff8113fc30>] do_filp_open+0x3a/0x8a
       [<ffffffff8114ab32>] ? __alloc_fd+0x168/0x17a
       [<ffffffff81131f4e>] do_sys_open+0x70/0x102
       [<ffffffff8108f06e>] ? trace_hardirqs_on_caller+0x160/0x197
       [<ffffffff81131ffe>] SyS_open+0x1e/0x20
       [<ffffffff81522742>] system_call_fastpath+0x16/0x1b
      Code: e5 41 54 53 48 89 f3 48 83 ec 10 48 23 56 78 48 39 c2 75 6c 31 f6 48 c7
      RIP  [<ffffffff810dee70>] probes_open+0x3b/0xa7
       RSP <ffff880076e53c38>
      CR2: 00000005000000f9
      ---[ end trace 35f17d68fc569897 ]---
      
      The unregister_probe_event() must be done first, and if it fails, the
      removal of the kprobe must also fail.
      
      Several changes have already been made by Oleg Nesterov and Masami Hiramatsu
      to allow moving the unregister_probe_event() call before the removal of
      the probe and to exit the function if it fails. This prevents the tp
      structure from being used after it is freed.
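      
      Roughly, the resulting order looks like the sketch below (simplified;
      the real trace_kprobe.c function carries more locking and error
      handling):
      
	static int unregister_trace_probe(struct trace_probe *tp)
	{
		/* An enabled event means the probe is in use right now. */
		if (trace_probe_is_enabled(tp))
			return -EBUSY;

		/* Remove the event first; bail out if it is in use. */
		if (unregister_probe_event(tp))
			return -EBUSY;

		__unregister_trace_probe(tp);
		list_del(&tp->list);
		return 0;
	}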
      
      Link: http://lkml.kernel.org/r/20130704034038.819592356@goodmis.org
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      428fbcf9
    • Oleg Nesterov's avatar
      tracing: trace_remove_event_call() should fail if call/file is in use · 8169887b
      Oleg Nesterov authored
      commit 2816c551c796ec14620325b2c9ed75b9979d3125 upstream.
      
      Change trace_remove_event_call(call) to return an error if this
      call is active. This is what the callers assume but cannot verify
      outside of the tracing locks. Both trace_kprobe.c and trace_uprobe.c
      need additional changes: unregister_trace_probe() should abort
      if trace_remove_event_call() fails.
      
      The caller is going to free this call/file, so we must ensure that
      nobody can use them after trace_remove_event_call() succeeds.
      debugfs should be fine after the previous changes, and event_remove()
      does TRACE_REG_UNREGISTER, but there are still two reasons why we need
      the additional checks (see the sketch after this list):
      
      - There could be a perf_event(s) attached to this tp_event, so the
        patch checks ->perf_refcount.
      
      - TRACE_REG_UNREGISTER can be suppressed by FTRACE_EVENT_FL_SOFT_MODE,
        so we simply check FTRACE_EVENT_FL_ENABLED protected by event_mutex.
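      
      Sketched below is the assumed shape of that check, performed under
      event_mutex before the call is removed (names modeled on trace_events.c
      of that era; a sketch, not the verbatim patch):
      
	static int probe_remove_event_call(struct ftrace_event_call *call)
	{
		struct trace_array *tr;
		struct ftrace_event_file *file;

	#ifdef CONFIG_PERF_EVENTS
		/* a perf_event may still be attached to this tp_event */
		if (call->perf_refcount)
			return -EBUSY;
	#endif
		do_for_each_event_file(tr, file) {
			if (file->event_call != call)
				continue;
			/* ENABLED also covers the case where SOFT_MODE
			 * suppressed TRACE_REG_UNREGISTER */
			if (file->flags & FTRACE_EVENT_FL_ENABLED)
				return -EBUSY;
			break;
		} while_for_each_event_file();

		__trace_remove_event_call(call);
		return 0;
	}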
      
      Link: http://lkml.kernel.org/r/20130729175033.GB26284@redhat.com
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      8169887b
    • Oleg Nesterov's avatar
      tracing: Change remove_event_file_dir() to clear "d_subdirs"->i_private · 012dc156
      Oleg Nesterov authored
      commit bf682c3159c4d298d1126a56793ed3f5e80395f7 upstream.
      
      Change remove_event_file_dir() to clear ->i_private for every
      file we are going to remove.
      
      We need to check file->dir != NULL because event_create_dir()
      can fail. debugfs_remove_recursive(NULL) is fine but the patch
      moves it under the same check anyway for readability.
      
      The spin_lock(d_lock) and the "d_inode != NULL" check are not needed
      afaics, but I do not understand this code well enough to be sure.
      
      tracing_open_generic_file() and tracing_release_generic_file()
      can go away; ftrace_enable_fops and ftrace_event_filter_fops
      use tracing_open_generic(), but only to check tracing_disabled.
      
      This fixes all races with event_remove() or instance_delete():
      f_op->read/write/whatever can never use the freed file/call, because
      all event/* files were changed to check and use ->i_private
      under event_mutex.
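      
      A simplified sketch of remove_event_file_dir() after this change (the
      d_lock and d_inode handling mirrors the cautious note above):
      
	static void remove_event_file_dir(struct ftrace_event_file *file)
	{
		struct dentry *dir = file->dir;
		struct dentry *child;

		if (dir) {
			spin_lock(&dir->d_lock);	/* probably unneeded */
			list_for_each_entry(child, &dir->d_subdirs, d_u.d_child) {
				if (child->d_inode)	/* probably unneeded */
					child->d_inode->i_private = NULL;
			}
			spin_unlock(&dir->d_lock);

			debugfs_remove_recursive(dir);
		}

		list_del(&file->list);
		remove_subsystem(file->system);
		kmem_cache_free(file_cachep, file);
	}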
      
      Note: this does not fix other problems; event_remove() can still
      destroy an active ftrace_event_call. We need more changes, but
      those changes are completely orthogonal.
      
      Link: http://lkml.kernel.org/r/20130728183527.GB16723@redhat.com
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      012dc156
    • Oleg Nesterov's avatar
      tracing: Introduce remove_event_file_dir() · c6febdf2
      Oleg Nesterov authored
      commit f6a84bdc75b5c11621dec58db73fe102cbaf40cc upstream.
      
      Preparation for the next patch. Extract the common code from
      remove_event_from_tracers() and __trace_remove_event_dirs()
      into the new helper, remove_event_file_dir().
      
      The patch looks more complicated than it actually is; it also
      moves remove_subsystem() up to avoid a forward declaration.
      
      Link: http://lkml.kernel.org/r/20130726172547.GA3629@redhat.com
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c6febdf2
    • Oleg Nesterov's avatar
      tracing: Change f_start() to take event_mutex and verify i_private != NULL · b86d0ba6
      Oleg Nesterov authored
      commit c5a44a1200c6eda2202434f25325e8ad19533fca upstream.
      
      trace_format_open() and trace_format_seq_ops are racy; nothing
      protects the ftrace_event_call from trace_remove_event_call().
      
      Change f_start() to take event_mutex and verify i_private != NULL,
      change f_stop() to drop this lock.
      
      This fixes nothing by itself, but now we can change the
      debugfs_remove("format") callers to nullify ->i_private and fix
      the problem.
      
      Note: the usage of event_mutex is sub-optimal but simple; we can
      change this later.
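      
      A sketch of the resulting pair, assuming an event_file_data()-style
      helper that reads ->i_private (simplified):
      
	static void *f_start(struct seq_file *m, loff_t *pos)
	{
		void *p = (void *)FORMAT_HEADER;
		loff_t l = 0;

		/* ->i_private is nullified under event_mutex on removal,
		 * and ->stop() runs even if ->start() fails, so the lock
		 * is always released. */
		mutex_lock(&event_mutex);
		if (!event_file_data(m->private))
			return ERR_PTR(-ENODEV);

		while (l < *pos && p)
			p = f_next(m, p, &l);

		return p;
	}

	static void f_stop(struct seq_file *m, void *p)
	{
		mutex_unlock(&event_mutex);
	}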
      
      Link: http://lkml.kernel.org/r/20130726172543.GA3622@redhat.com
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b86d0ba6
    • Oleg Nesterov's avatar
      tracing: Change event_filter_read/write to verify i_private != NULL · 70c91fb9
      Oleg Nesterov authored
      commit e2912b091c26b8ea95e5e00a43a7ac620f6c94a6 upstream.
      
      event_filter_read/write() are racy; the ftrace_event_call can already
      have been freed by trace_remove_event_call() callers.
      
      1. Shift mutex_lock(event_mutex) from print/apply_event_filter to
         the callers.
      
      2. Change the callers, event_filter_read() and event_filter_write()
         to read i_private under this mutex and abort if it is NULL.
      
      This fixes nothing by itself, but now we can change the
      debugfs_remove("filter") callers to nullify ->i_private and fix
      the problem.
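      
      On the read side, the result looks roughly like this sketch (again
      assuming an event_file_data() helper that returns ->i_private, which
      is NULL once the event has been removed):
      
	static ssize_t
	event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
			  loff_t *ppos)
	{
		struct ftrace_event_call *call;
		struct trace_seq *s;
		int r = -ENODEV;

		if (*ppos)
			return 0;

		s = kmalloc(sizeof(*s), GFP_KERNEL);
		if (!s)
			return -ENOMEM;
		trace_seq_init(s);

		mutex_lock(&event_mutex);
		call = event_file_data(filp);	/* NULL once removed */
		if (call)
			print_event_filter(call, s);
		mutex_unlock(&event_mutex);

		if (call)
			r = simple_read_from_buffer(ubuf, cnt, ppos,
						    s->buffer, s->len);
		kfree(s);
		return r;
	}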
      
      Link: http://lkml.kernel.org/r/20130726172540.GA3619@redhat.com
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      70c91fb9