1. 22 Feb, 2014 2 commits
    • md/raid5: Fix CPU hotplug callback registration · 4d4ef86d
      Oleg Nesterov authored
      commit 789b5e0315284463617e106baad360cb9e8db3ac upstream.
      
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Interestingly, the raid5 code can actually prevent double initialization and
      hence can use the following simplified form of callback registration:
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	put_online_cpus();
      
      A hotplug operation that occurs between registering the notifier and calling
      get_online_cpus() won't disrupt anything, because the code takes care to
      perform the memory allocations only once.
      
      So reorganize the code in raid5 this way to fix the deadlock with callback
      registration.
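
      As a minimal illustrative sketch (using the same placeholder names as the
      snippets above -- foobar_cpu_notifier, init_cpu -- rather than the actual
      raid5 code), the reordered registration looks like this:

	#include <linux/cpu.h>
	#include <linux/notifier.h>

	static int foobar_cpu_callback(struct notifier_block *nfb,
				       unsigned long action, void *hcpu)
	{
		long cpu = (long)hcpu;

		if (action == CPU_UP_PREPARE)
			init_cpu(cpu);	/* must be safe to run twice per CPU */

		return NOTIFY_OK;
	}

	static struct notifier_block foobar_cpu_notifier = {
		.notifier_call = foobar_cpu_callback,
	};

	static void foobar_init(void)
	{
		unsigned int cpu;

		/* Register first; a callback firing for a CPU we also
		 * initialize below is harmless because init_cpu() prevents
		 * double initialization. */
		register_cpu_notifier(&foobar_cpu_notifier);

		get_online_cpus();
		for_each_online_cpu(cpu)
			init_cpu(cpu);
		put_online_cpus();
	}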
      
      Cc: linux-raid@vger.kernel.org
      Fixes: 36d1c647
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      [Srivatsa: Fixed the unregister_cpu_notifier() deadlock, added the
      free_scratch_buffer() helper to condense code further and wrote the changelog.]
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • md/raid1: restore ability for check and repair to fix read errors. · 9f2d2899
      NeilBrown authored
      commit 1877db75589a895bbdc4c4c3f23558e57b521141 upstream.
      
      commit 30bc9b53878a9921b02e3b5bc4283ac1c6de102a
          md/raid1: fix bio handling problems in process_checks()
      
      Moved the bio_reset() to a point before where BIO_UPTODATE is checked,
      so that check now always reports that the bio is uptodate, even if it is not.
      
      This causes process_checks() to sometimes treat read-errors as
      successful matches so the good data isn't written out.
      
      This patch preserves the flag until it is needed.
      
      Bug was introduced in 3.11, but backported to 3.10-stable (as it fixed
      an even worse bug).  So suitable for any -stable since 3.10.
      Reported-and-tested-by: Michael Tokarev <mjt@tls.msk.ru>
      Fixes: 30bc9b53878a9921b02e3b5bc4283ac1c6de102a
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 13 Feb, 2014 7 commits
    • dm sysfs: fix a module unload race · 4f664036
      Mikulas Patocka authored
      commit 2995fa78e423d7193f3b57835f6c1c75006a0315 upstream.
      
      This reverts commit be35f48610 ("dm: wait until embedded kobject is
      released before destroying a device") and provides an improved fix.
      
      The kobject release code that calls the completion must be placed in a
      non-module file, otherwise there is a module unload race (if the process
      calling dm_kobject_release is preempted and the DM module unloaded after
      the completion is triggered, but before dm_kobject_release returns).
      
      To fix this race, this patch moves the completion code to dm-builtin.c
      which is always compiled directly into the kernel if BLK_DEV_DM is
      selected.
      
      The patch introduces a new dm_kobject_holder structure whose purpose is
      to keep the completion and kobject in one place, so that they can be
      accessed from non-module code without the need to export the layout of
      struct mapped_device to that code.
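
      A minimal sketch of that arrangement (the shape is taken from the
      description above; treat the details as illustrative rather than the
      full patch):

	#include <linux/kobject.h>
	#include <linux/completion.h>
	#include <linux/kernel.h>

	struct dm_kobject_holder {
		struct kobject kobj;
		struct completion completion;
	};

	/* Built into the kernel (dm-builtin.c), so it cannot be unloaded
	 * while a release is still in flight.  It only signals; it never
	 * frees the embedding structure. */
	void dm_kobject_release(struct kobject *kobj)
	{
		complete(&container_of(kobj, struct dm_kobject_holder,
				       kobj)->completion);
	}
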
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm space map metadata: fix bug in resizing of thin metadata · 88972eec
      Joe Thornber authored
      commit fca028438fb903852beaf7c3fe1cd326651af57d upstream.
      
      This bug was introduced in commit 7e664b3dec431e ("dm space map metadata:
      fix extending the space map").
      
      When extending a dm-thin metadata volume we:
      
      - Switch the space map into a simple bootstrap mode, which allocates
        all space linearly from the newly added space.
      - Add new bitmap entries for the new space
      - Increment the reference counts for those newly allocated bitmap
        entries
      - Commit changes to disk
      - Switch back out of bootstrap mode.
      
      But the disk commit may allocate space itself; if so, this fact will be
      lost when switching out of bootstrap mode.
      
      The bug exhibited itself as an error when the bitmap_root, with an
      erroneous ref count of 0, was subsequently decremented as part of a
      later disk commit.  This would cause the disk commit to fail, and thinp
      to enter read_only mode.  The metadata was not damaged (thin_check
      passed).
      
      The fix is to put the increments + commit into a loop, running until
      the commit has not allocated extra space.  In practice this loop only
      runs twice.
      
      With this fix the following device mapper testsuite test passes:
       dmtest run --suite thin-provisioning -n thin_remove_works_after_resize
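
      A schematic of that loop (the field and helper names -- add_refs(),
      commit() -- are assumptions made for illustration, not the patch
      itself):

	/* Keep committing until a commit allocates no further blocks. */
	static int commit_until_stable(struct sm_metadata *smm,
				       dm_block_t first_new)
	{
		dm_block_t accounted_for = first_new;
		int r;

		do {
			dm_block_t end = smm->begin;

			/* bump ref counts for blocks handed out so far */
			r = add_refs(smm, accounted_for, end);
			if (r)
				return r;
			accounted_for = end;

			/* the commit itself may allocate more blocks */
			r = commit(smm);
			if (r)
				return r;
		} while (accounted_for != smm->begin);

		return 0;
	}
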
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm space map metadata: fix extending the space map · f15396a3
      Joe Thornber authored
      commit 7e664b3dec431eebf0c5df5ff704d6197634cf35 upstream.
      
      When extending a metadata space map we should do the first commit whilst
      still in bootstrap mode -- a mode where all blocks get allocated in the
      new area.
      
      That way the commit overhead is allocated from the newly added space.
      Otherwise we risk running out of space.
      
      With this fix, and the previous commit "dm space map common: make sure
      new space is used during extend", the following device mapper testsuite
      test passes:
       dmtest run --suite thin-provisioning -n /resize_metadata_no_io/
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm space map common: make sure new space is used during extend · 24737341
      Joe Thornber authored
      commit 12c91a5c2d2a8e8cc40a9552313e1e7b0a2d9ee3 upstream.
      
      When extending a low level space map we should update nr_blocks at
      the start so the new space is used for the index entries.
      
      Otherwise extend can fail, e.g.: sm_metadata_extend call sequence
      that fails:
       -> sm_ll_extend
          -> dm_tm_new_block -> dm_sm_new_block -> sm_bootstrap_new_block
          => returns -ENOSPC because smm->begin == smm->ll.nr_blocks
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm: wait until embedded kobject is released before destroying a device · eef2b6df
      Mikulas Patocka authored
      commit be35f486108227e10fe5d96fd42fb2b344c59983 upstream.
      
      There may be other parts of the kernel holding a reference on the dm
      kobject.  We must wait until all references are dropped before
      deallocating the mapped_device structure.
      
      The dm_kobject_release method signals that all references are dropped
      via completion.  But dm_kobject_release doesn't free the kobject (which
      is embedded in the mapped_device structure).
      
      This is the sequence of operations:
      * when destroying a DM device, call kobject_put from dm_sysfs_exit
      * wait until all users stop using the kobject, when it happens the
        release method is called
      * the release method signals the completion and should return without
        delay
      * the dm device removal code that waits on the completion continues
      * the dm device removal code drops the dm_mod reference the device had
      * the dm device removal code frees the mapped_device structure that
        contains the kobject
      
      Using kobject this way should avoid the module unload race that was
      mentioned at the beginning of this thread:
      https://lkml.org/lkml/2014/1/4/83
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm thin: initialize dm_thin_new_mapping returned by get_next_mapping · 025d61e0
      Mike Snitzer authored
      commit 16961b042db8cc5cf75d782b4255193ad56e1d4f upstream.
      
      As additional members are added to the dm_thin_new_mapping structure
      care should be taken to make sure they get initialized before use.
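
      The defensive pattern being described, as a sketch (the surrounding
      get_next_mapping() context and field names are assumed):

	struct dm_thin_new_mapping *m = pool->next_mapping;

	BUG_ON(!m);
	/* Zero the whole structure so any field added later starts out
	 * initialized instead of carrying stale values from a prior use. */
	memset(m, 0, sizeof(*m));
	pool->next_mapping = NULL;

	return m;
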
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm thin: fix discard support to a previously shared block · 614319df
      Joe Thornber authored
      commit 19fa1a6756ed9e92daa9537c03b47d6b55cc2316 upstream.
      
      If a snapshot is created and later deleted the origin dm_thin_device's
      snapshotted_time will have been updated to reflect the snapshot's
      creation time.  The 'shared' flag in the dm_thin_lookup_result struct
      returned from dm_thin_find_block() is an approximation based on
      snapshotted_time -- this is done to avoid O(n), or worse, time
      complexity.  In this case, the shared flag would be true.
      
      But because the 'shared' flag reflects an approximation a block can be
      incorrectly assumed to be shared (e.g. false positive for 'shared'
      because the snapshot no longer exists).  This could result in discards
      issued to a thin device not being passed down to the pool's underlying
      data device.
      
      To fix this we double check that a thin block is really still in-use
      after a mapping is removed using dm_pool_block_is_used().  If the
      reference count for a block is now zero the discard is allowed to be
      passed down.
      
      Also add a 'definitely_not_shared' member to the dm_thin_new_mapping
      structure -- it reflects that the 'shared' flag in the response from
      dm_thin_find_block() can only be held as definitive if false is
      returned.
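
      A sketch of the post-removal check (dm_pool_block_is_used() is the
      interface named above; the surrounding code and the helper
      pass_discard_down() are assumptions):

	static int maybe_pass_down_discard(struct pool *pool,
					   dm_block_t data_block)
	{
		bool used = true;
		int r;

		/* re-read the reference count now that the mapping is gone */
		r = dm_pool_block_is_used(pool->pmd, data_block, &used);
		if (r)
			return r;	/* be conservative on error */

		if (!used)
			pass_discard_down(pool, data_block);

		return 0;
	}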
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1043527
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 06 Feb, 2014 2 commits
    • bcache: Data corruption fix · f4ac67e8
      Kent Overstreet authored
      commit ef71ec00002d92a08eb27e9d036e3d48835b6597 upstream.
      
      The code that handles overlapping extents that we've just read back in from disk
      was depending on the behaviour of the code that handles overlapping extents as
      we're inserting into a btree node in the case of an insert that forced an
      existing extent to be split: on insert, if we had to split we'd also insert a
      new extent to represent the top part of the old extent - and then that new
      extent would get written out.
      
      The code that read the extents back in thus did not bother with splitting extents -
      if it saw an extent that overlapped in the middle of an older extent, it would
      trim the old extent to only represent the bottom part, assuming that the
      original insert would've inserted a new extent to represent the top part.
      
      I still haven't figured out _how_ it can happen, but I'm now pretty convinced
      (and testing has confirmed) that there's some kind of an obscure corner case
      (probably involving extent merging, and multiple overwrites in different sets)
      that breaks this. The fix is to change the mergesort fixup code to split extents
      itself when required.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • md/raid5: fix long-standing problem with bitmap handling on write failure. · 6ba854e9
      NeilBrown authored
      commit 9f97e4b128d2ea90a5f5063ea0ee3b0911f4c669 upstream.
      
      Before a write starts we set a bit in the write-intent bitmap.
      When the write completes we clear that bit if the write was successful
      to all devices.  However if the write wasn't fully successful we
      should not clear the bit.  If the faulty drive is subsequently
      re-added, the fact that the bit is still set ensures that we will
      re-write the data that is missing.
      
      This logic is mediated by the STRIPE_DEGRADED flag - we only clear the
      bitmap bit when this flag is not set.
      Currently we correctly set the flag if a write starts when some
      devices are failed or missing.  But we do *not* set the flag if some
      device failed during the write attempt.
      This is wrong and can result in clearing the bit inappropriately.
      
      So: set the flag when a write fails.
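
      Sketched out of context, the change amounts to setting the flag in the
      raid5 write-completion path (the surrounding code is assumed):

	if (!uptodate)
		/* a device failed during the write: keep the bitmap bit */
		set_bit(STRIPE_DEGRADED, &sh->state);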
      
      This bug has been present since bitmaps were introduced, so the fix is
      suitable for any -stable kernel.
      Reported-by: Ethan Wilson <ethan.wilson@shiftmail.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. 25 Jan, 2014 4 commits
  5. 20 Dec, 2013 7 commits
    • dm thin: switch to read only mode if a mapping insert fails · f4cf4b1b
      Joe Thornber authored
      commit fafc7a815e40255d24e80a1cb7365892362fa398 upstream.
      
      Switch the thin pool to read-only mode when dm_thin_insert_block() fails
      since there is little reason to expect the cause of the failure to be
      resolved without further action by user space.
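
      The shape of the change, as a sketch (the surrounding context is an
      assumption, not taken from the patch):

	r = dm_thin_insert_block(tc->td, m->virt_block, m->data_block);
	if (r) {
		DMERR_LIMIT("dm_thin_insert_block() failed: error = %d", r);
		set_pool_mode(pool, PM_READ_ONLY);	/* fail hard */
	}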
      
      This issue was noticed with the device-mapper-test-suite using:
      dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/
      
      The quantity of errors logged in this case must be reduced.
      
      before patch:
      
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      <snip ... these repeat for a long while ... >
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      device-mapper: thin: 253:4: no free metadata space available.
      device-mapper: thin: 253:4: switching pool to read-only mode
      
      after patch:
      
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: 253:4: dm_thin_insert_block() failed: error = -28
      device-mapper: thin: 253:4: switching pool to read-only mode
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm table: fail dm_table_create on dm_round_up overflow · 135949c1
      Mikulas Patocka authored
      commit 5b2d06576c5410c10d95adfd5c4d8b24de861d87 upstream.
      
      The dm_round_up function may overflow to zero.  In this case,
      dm_table_create() must fail rather than go on to allocate an empty array
      with alloc_targets().
      
      This fixes a possible memory corruption that could be caused by passing
      too large a number in "param->target_count".
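
      The check amounts to something like this in dm_table_create() (a
      sketch; the exact placement and error code are assumptions):

	num_targets = dm_round_up(num_targets, KEYS_PER_NODE);
	if (!num_targets) {
		/* dm_round_up() overflowed to zero: refuse to build a table */
		kfree(t);
		return -EOVERFLOW;
	}
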
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm space map metadata: return on failure in sm_metadata_new_block · 2c54d62a
      Mike Snitzer authored
      commit f62b6b8f498658a9d537c7d380e9966f15e1b2a1 upstream.
      
      Commit 2fc48021 ("dm persistent
      metadata: add space map threshold callback") introduced a regression
      to the metadata block allocation path that resulted in errors being
      ignored.  This regression was uncovered by running the following
      device-mapper-test-suite test:
      dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/
      
      The ignored error codes in sm_metadata_new_block() could crash the
      kernel through use of either the dm-thin or dm-cache targets, e.g.:
      
      device-mapper: thin: 253:4: reached low water mark for metadata device: sending event.
      device-mapper: space map metadata: unable to allocate new metadata block
      general protection fault: 0000 [#1] SMP
      ...
      Workqueue: dm-thin do_worker [dm_thin_pool]
      task: ffff880035ce2ab0 ti: ffff88021a054000 task.ti: ffff88021a054000
      RIP: 0010:[<ffffffffa0331385>]  [<ffffffffa0331385>] metadata_ll_load_ie+0x15/0x30 [dm_persistent_data]
      RSP: 0018:ffff88021a055a68  EFLAGS: 00010202
      RAX: 003fc8243d212ba0 RBX: ffff88021a780070 RCX: ffff88021a055a78
      RDX: ffff88021a055a78 RSI: 0040402222a92a80 RDI: ffff88021a780070
      RBP: ffff88021a055a68 R08: ffff88021a055ba4 R09: 0000000000000010
      R10: 0000000000000000 R11: 00000002a02e1000 R12: ffff88021a055ad4
      R13: 0000000000000598 R14: ffffffffa0338470 R15: ffff88021a055ba4
      FS:  0000000000000000(0000) GS:ffff88033fca0000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 00007f467c0291b8 CR3: 0000000001a0b000 CR4: 00000000000007e0
      Stack:
       ffff88021a055ab8 ffffffffa0332020 ffff88021a055b30 0000000000000001
       ffff88021a055b30 0000000000000000 ffff88021a055b18 0000000000000000
       ffff88021a055ba4 ffff88021a055b98 ffff88021a055ae8 ffffffffa033304c
      Call Trace:
       [<ffffffffa0332020>] sm_ll_lookup_bitmap+0x40/0xa0 [dm_persistent_data]
       [<ffffffffa033304c>] sm_metadata_count_is_more_than_one+0x8c/0xc0 [dm_persistent_data]
       [<ffffffffa0333825>] dm_tm_shadow_block+0x65/0x110 [dm_persistent_data]
       [<ffffffffa0331b00>] sm_ll_mutate+0x80/0x300 [dm_persistent_data]
       [<ffffffffa0330e60>] ? set_ref_count+0x10/0x10 [dm_persistent_data]
       [<ffffffffa0331dba>] sm_ll_inc+0x1a/0x20 [dm_persistent_data]
       [<ffffffffa0332270>] sm_disk_new_block+0x60/0x80 [dm_persistent_data]
       [<ffffffff81520036>] ? down_write+0x16/0x40
       [<ffffffffa001e5c4>] dm_pool_alloc_data_block+0x54/0x80 [dm_thin_pool]
       [<ffffffffa001b23c>] alloc_data_block+0x9c/0x130 [dm_thin_pool]
       [<ffffffffa001c27e>] provision_block+0x4e/0x180 [dm_thin_pool]
       [<ffffffffa001fe9a>] ? dm_thin_find_block+0x6a/0x110 [dm_thin_pool]
       [<ffffffffa001c57a>] process_bio+0x1ca/0x1f0 [dm_thin_pool]
       [<ffffffff8111e2ed>] ? mempool_free+0x8d/0xa0
       [<ffffffffa001d755>] process_deferred_bios+0xc5/0x230 [dm_thin_pool]
       [<ffffffffa001d911>] do_worker+0x51/0x60 [dm_thin_pool]
       [<ffffffff81067872>] process_one_work+0x182/0x3b0
       [<ffffffff81068c90>] worker_thread+0x120/0x3a0
       [<ffffffff81068b70>] ? manage_workers+0x160/0x160
       [<ffffffff8106eb2e>] kthread+0xce/0xe0
       [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70
       [<ffffffff8152af6c>] ret_from_fork+0x7c/0xb0
       [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70
       [<ffffffff8152af6c>] ret_from_fork+0x7c/0xb0
       [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm delay: fix a possible deadlock due to shared workqueue · 729d38d1
      Mikulas Patocka authored
      commit 718822c1c112dc99e0c72c8968ee1db9d9d910f0 upstream.
      
      The dm-delay target uses a shared workqueue for multiple instances.  This
      can cause deadlock if two or more dm-delay targets are stacked on the top
      of each other.
      
      This patch changes dm-delay to use a per-instance workqueue.
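
      The essence of the change, sketched (the field and error-string names
      are assumptions):

	/* in the constructor: one workqueue per dm-delay instance */
	dc->kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
	if (!dc->kdelayd_wq) {
		ti->error = "Cannot allocate kdelayd workqueue";
		goto bad;
	}
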
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm array: fix a reference counting bug in shadow_ablock · 1cfc4552
      Joe Thornber authored
      commit ed9571f0cf1fe09d3506302610f3ccdfa1d22c4a upstream.
      
      An old array block could have its reference count decremented below
      zero when it is being replaced in the btree by a new array block.
      
      The fix is to increment the old ablock's reference count just before
      inserting a new ablock into the btree.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm snapshot: avoid snapshot space leak on crash · 20d68d38
      Mikulas Patocka authored
      commit 230c83afdd9cd384348475bea1e14b80b3b6b1b8 upstream.
      
      There is a possible leak of snapshot space in case of crash.
      
      The reason for space leaking is that chunks in the snapshot device are
      allocated sequentially, but they are finished (and stored in the metadata)
      out of order, depending on the order in which copying finished.
      
      For example, suppose that the metadata contains the following records:
      SUPERBLOCK
      METADATA (blocks 0 ... 250)
      DATA 0
      DATA 1
      DATA 2
      ...
      DATA 250
      
      Now suppose that you allocate 10 new data blocks 251-260. Suppose that
      copying of these blocks finishes out of order (block 260 finished first
      and block 251 finished last). Now, the snapshot device looks like
      this:
      SUPERBLOCK
      METADATA (blocks 0 ... 250, 260, 259, 258, 257, 256)
      DATA 0
      DATA 1
      DATA 2
      ...
      DATA 250
      DATA 251
      DATA 252
      DATA 253
      DATA 254
      DATA 255
      METADATA (blocks 255, 254, 253, 252, 251)
      DATA 256
      DATA 257
      DATA 258
      DATA 259
      DATA 260
      
      Now, if the machine crashes after writing the first metadata block but
      before writing the second metadata block, the space for areas DATA 250-255
      is leaked: it contains no valid data and it will never be used in the
      future.
      
      This patch makes dm-snapshot complete exceptions in the same order they
      were allocated, thus fixing this bug.
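
      A generic sketch of completing in allocation order (not the actual
      dm-snapshot code; the structure and helper names are invented):

	#include <linux/list.h>
	#include <linux/slab.h>

	struct pending_exception {
		struct list_head list;	/* kept in allocation order */
		bool copy_done;
	};

	static void commit_exceptions_in_order(struct list_head *queue)
	{
		struct pending_exception *pe, *tmp;

		list_for_each_entry_safe(pe, tmp, queue, list) {
			if (!pe->copy_done)
				break;	/* stop at the first unfinished copy */
			write_exception_metadata(pe);	/* invented helper */
			list_del(&pe->list);
			kfree(pe);
		}
	}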
      
      Note: when backporting this patch to the stable kernel, change the version
      field in the following way:
      * if version in the stable kernel is {1, 11, 1}, change it to {1, 12, 0}
      * if version in the stable kernel is {1, 10, 0} or {1, 10, 1}, change it
        to {1, 10, 2}
      Userspace reads the version to determine if the bug was fixed, so the
      version change is needed.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm bufio: initialize read-only module parameters · d468a287
      Mikulas Patocka authored
      commit 4cb57ab4a2e61978f3a9b7d4f53988f30d61c27f upstream.
      
      Some module parameters in dm-bufio are read-only. These parameters
      inform the user about memory consumption. They are not supposed to be
      changed by the user.
      
      However, despite being read-only, these parameters can be set on
      modprobe or insmod command line, for example:
      modprobe dm-bufio current_allocated_bytes=12345
      
      The kernel doesn't expect that these variables can be non-zero at module
      initialization, and if the user sets them, it results in a BUG.
      
      This patch initializes the variables in the module init routine, so that
      user-supplied values are ignored.
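
      The fix boils down to resetting the counters in the module init path
      (a sketch; the variable names are assumptions):

	static int __init dm_bufio_init(void)
	{
		/* ignore whatever the user passed on the command line for
		 * the read-only statistics parameters */
		dm_bufio_allocated_kmem_cache = 0;
		dm_bufio_allocated_get_free_pages = 0;
		dm_bufio_allocated_vmalloc = 0;
		dm_bufio_current_allocated = 0;

		/* ... the rest of the init routine continues as before ... */
		return 0;
	}
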
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  6. 04 Dec, 2013 5 commits
  7. 13 Nov, 2013 5 commits
    • md: Fix skipping recovery for read-only arrays. · ed840bec
      Lukasz Dorau authored
      commit 61e4947c99c4494336254ec540c50186d186150b upstream.
      
      Since:
              commit 7ceb17e8
              md: Allow devices to be re-added to a read-only array.
      
      spares are activated on a read-only array. In the case of the raid1 and raid10
      personalities this causes not-in-sync devices to be marked in-sync
      without checking whether recovery has finished.
      
      If a read-only array is degraded and one of its devices is not in-sync
      (because the array has been only partially recovered) recovery will be skipped.
      
      This patch adds a check that recovery has finished before marking a device
      in-sync for the raid1 and raid10 personalities. For the raid5 personality
      such a check is already present (at raid5.c:6029).
      
      Bug was introduced in 3.10 and causes data corruption.
      Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
      Signed-off-by: Lukasz Dorau <lukasz.dorau@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • md: avoid deadlock when md_set_badblocks. · 04654966
      Bian Yu authored
      commit 905b0297a9533d7a6ee00a01a990456636877dd6 upstream.
      
      When operating a hard disk and hitting errors, md_set_badblocks is called after
      scsi_restart_operations, which has already disabled interrupts.  But md_set_badblocks
      will call write_sequnlock_irq and re-enable interrupts, so a softirq can preempt the
      current thread and that may cause a deadlock.  I think this situation should
      use write_seqlock_irqsave()/write_sequnlock_irqrestore() instead.
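
      The interrupt-state-preserving pattern being suggested (a sketch of the
      seqlock API usage, not the full md_set_badblocks()):

	unsigned long flags;

	write_seqlock_irqsave(&bb->lock, flags);	/* save irq state */
	/* ... record the bad block range ... */
	write_sequnlock_irqrestore(&bb->lock, flags);	/* restore it rather
							   than force-enabling */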
      
      I met the situation and the call trace is below:
      [  638.919974] BUG: spinlock recursion on CPU#0, scsi_eh_13/1010
      [  638.921923]  lock: 0xffff8800d4d51fc8, .magic: dead4ead, .owner: scsi_eh_13/1010, .owner_cpu: 0
      [  638.923890] CPU: 0 PID: 1010 Comm: scsi_eh_13 Not tainted 3.12.0-rc5+ #37
      [  638.925844] Hardware name: To be filled by O.E.M. To be filled by O.E.M./MAHOBAY, BIOS 4.6.5 03/05/2013
      [  638.927816]  ffff880037ad4640 ffff880118c03d50 ffffffff8172ff85 0000000000000007
      [  638.929829]  ffff8800d4d51fc8 ffff880118c03d70 ffffffff81730030 ffff8800d4d51fc8
      [  638.931848]  ffffffff81a72eb0 ffff880118c03d90 ffffffff81730056 ffff8800d4d51fc8
      [  638.933884] Call Trace:
      [  638.935867]  <IRQ>  [<ffffffff8172ff85>] dump_stack+0x55/0x76
      [  638.937878]  [<ffffffff81730030>] spin_dump+0x8a/0x8f
      [  638.939861]  [<ffffffff81730056>] spin_bug+0x21/0x26
      [  638.941836]  [<ffffffff81336de4>] do_raw_spin_lock+0xa4/0xc0
      [  638.943801]  [<ffffffff8173f036>] _raw_spin_lock+0x66/0x80
      [  638.945747]  [<ffffffff814a73ed>] ? scsi_device_unbusy+0x9d/0xd0
      [  638.947672]  [<ffffffff8173fb1b>] ? _raw_spin_unlock+0x2b/0x50
      [  638.949595]  [<ffffffff814a73ed>] scsi_device_unbusy+0x9d/0xd0
      [  638.951504]  [<ffffffff8149ec47>] scsi_finish_command+0x37/0xe0
      [  638.953388]  [<ffffffff814a75e8>] scsi_softirq_done+0xa8/0x140
      [  638.955248]  [<ffffffff8130e32b>] blk_done_softirq+0x7b/0x90
      [  638.957116]  [<ffffffff8104fddd>] __do_softirq+0xfd/0x330
      [  638.958987]  [<ffffffff810b964f>] ? __lock_release+0x6f/0x100
      [  638.960861]  [<ffffffff8174a5cc>] call_softirq+0x1c/0x30
      [  638.962724]  [<ffffffff81004c7d>] do_softirq+0x8d/0xc0
      [  638.964565]  [<ffffffff8105024e>] irq_exit+0x10e/0x150
      [  638.966390]  [<ffffffff8174ad4a>] smp_apic_timer_interrupt+0x4a/0x60
      [  638.968223]  [<ffffffff817499af>] apic_timer_interrupt+0x6f/0x80
      [  638.970079]  <EOI>  [<ffffffff810b964f>] ? __lock_release+0x6f/0x100
      [  638.971899]  [<ffffffff8173fa6a>] ? _raw_spin_unlock_irq+0x3a/0x50
      [  638.973691]  [<ffffffff8173fa60>] ? _raw_spin_unlock_irq+0x30/0x50
      [  638.975475]  [<ffffffff81562393>] md_set_badblocks+0x1f3/0x4a0
      [  638.977243]  [<ffffffff81566e07>] rdev_set_badblocks+0x27/0x80
      [  638.978988]  [<ffffffffa00d97bb>] raid5_end_read_request+0x36b/0x4e0 [raid456]
      [  638.980723]  [<ffffffff811b5a1d>] bio_endio+0x1d/0x40
      [  638.982463]  [<ffffffff81304ff3>] req_bio_endio.isra.65+0x83/0xa0
      [  638.984214]  [<ffffffff81306b9f>] blk_update_request+0x7f/0x350
      [  638.985967]  [<ffffffff81306ea1>] blk_update_bidi_request+0x31/0x90
      [  638.987710]  [<ffffffff813085e0>] __blk_end_bidi_request+0x20/0x50
      [  638.989439]  [<ffffffff8130862f>] __blk_end_request_all+0x1f/0x30
      [  638.991149]  [<ffffffff81308746>] blk_peek_request+0x106/0x250
      [  638.992861]  [<ffffffff814a62a9>] ? scsi_kill_request.isra.32+0xe9/0x130
      [  638.994561]  [<ffffffff814a633a>] scsi_request_fn+0x4a/0x3d0
      [  638.996251]  [<ffffffff813040a7>] __blk_run_queue+0x37/0x50
      [  638.997900]  [<ffffffff813045af>] blk_run_queue+0x2f/0x50
      [  638.999553]  [<ffffffff814a5750>] scsi_run_queue+0xe0/0x1c0
      [  639.001185]  [<ffffffff814a7721>] scsi_run_host_queues+0x21/0x40
      [  639.002798]  [<ffffffff814a2e87>] scsi_restart_operations+0x177/0x200
      [  639.004391]  [<ffffffff814a4fe9>] scsi_error_handler+0xc9/0xe0
      [  639.005996]  [<ffffffff814a4f20>] ? scsi_unjam_host+0xd0/0xd0
      [  639.007600]  [<ffffffff81072f6b>] kthread+0xdb/0xe0
      [  639.009205]  [<ffffffff81072e90>] ? flush_kthread_worker+0x170/0x170
      [  639.010821]  [<ffffffff81748cac>] ret_from_fork+0x7c/0xb0
      [  639.012437]  [<ffffffff81072e90>] ? flush_kthread_worker+0x170/0x170
      
      This bug was introduced in commit 2e8ac303
      (the first time rdev_set_badblocks was called from interrupt context),
      so this patch is appropriate for 3.5 and subsequent kernels.
      Signed-off-by: Bian Yu <bianyu@kedacom.com>
      Reviewed-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • raid5: avoid finding "discard" stripe · 01e608d7
      Shaohua Li authored
      commit d47648fcf0611812286f68131b40251c6fa54f5e upstream.
      
      A SCSI discard will damage the discard stripe's bio settings, e.g., some fields are
      changed. If the stripe is reused very soon, we have wrong bio settings. We
      remove the discard stripe from the hash list, so next time the stripe will be fully
      initialized.
      
      Suitable for backport to 3.7+.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • raid5: set bio bi_vcnt 0 for discard request · 7e44a926
      Shaohua Li authored
      commit 37c61ff31e9b5e3fcf3cc6579f5c68f6ad40c4b1 upstream.
      
      The SCSI layer will add a new payload for a discard request. If two bios are merged
      into one, the second bio has bi_vcnt 1, which is set in raid5. This will confuse
      SCSI and cause an oops.
      
      Suitable for backport to 3.7+
      Reported-by: Jes Sorensen <Jes.Sorensen@redhat.com>
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bcache: Fixed incorrect order of arguments to bio_alloc_bioset() · 955a23e1
      Kent Overstreet authored
      commit d4eddd42f592a0cf06818fae694a3d271f842e4d upstream.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. 04 Nov, 2013 1 commit
    • dm snapshot: fix data corruption · 2d99b6dd
      Mikulas Patocka authored
      commit e9c6a182649f4259db704ae15a91ac820e63b0ca upstream.
      
      This patch fixes a particular type of data corruption that has been
      encountered when loading a snapshot's metadata from disk.
      
      When we allocate a new chunk in persistent_prepare, we increment
      ps->next_free and we make sure that it doesn't point to a metadata area
      by further incrementing it if necessary.
      
      When we load metadata from disk on device activation, ps->next_free is
      positioned after the last used data chunk. However, if this last used
      data chunk is followed by a metadata area, ps->next_free is positioned
      erroneously to the metadata area. A newly-allocated chunk is placed at
      the same location as the metadata area, resulting in data or metadata
      corruption.
      
      This patch changes the code so that ps->next_free skips the metadata
      area when metadata are loaded in function read_exceptions.
      
      The patch also moves a piece of code from persistent_prepare_exception
      to a separate function skip_metadata to avoid code duplication.
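
      Roughly what such a helper looks like (a sketch; the on-disk layout
      constants and field names are assumptions):

	static void skip_metadata(struct pstore *ps)
	{
		/* one metadata chunk follows every exceptions_per_area
		 * data chunks */
		uint32_t stride = ps->exceptions_per_area + 1;
		chunk_t next_free = ps->next_free;

		/* if next_free would land on a metadata area, step past it */
		if (sector_div(next_free, stride) == NUM_SNAPSHOT_HDR_CHUNKS)
			ps->next_free++;
	}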
      
      CVE-2013-4299
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  9. 13 Oct, 2013 1 commit
  10. 05 Oct, 2013 6 commits
    • dm-raid: silence compiler warning on rebuilds_per_group. · f38af5d3
      NeilBrown authored
      commit 3f6bbd3ffd7b733dd705e494663e5761aa2cb9c1 upstream.
      
      This doesn't really need to be initialised, but it doesn't hurt,
      silences the compiler, and as it is a counter it makes sense for it to
      start at zero.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm mpath: disable WRITE SAME if it fails · e9d60f69
      Mike Snitzer authored
      commit f84cb8a46a771f36a04a02c61ea635c968ed5f6a upstream.
      
      Workaround the SCSI layer's problematic WRITE SAME heuristics by
      disabling WRITE SAME in the DM multipath device's queue_limits if an
      underlying device disabled it.
      
      The WRITE SAME heuristics, with both the original commit 5db44863
      ("[SCSI] sd: Implement support for WRITE SAME") and the updated commit
      66c28f971 ("[SCSI] sd: Update WRITE SAME heuristics"), default to enabling
      WRITE SAME(10) even without successfully determining it is supported.
      After the first failed WRITE SAME the SCSI layer will disable WRITE SAME
      for the device (by setting sdkp->device->no_write_same which results in
      'max_write_same_sectors' in device's queue_limits to be set to 0).
      
      When a device is stacked on top of such a SCSI device, any changes to that
      SCSI device's queue_limits do not automatically propagate up the stack.
      As such, a DM multipath device will not have its WRITE SAME support
      disabled.  This causes the block layer to continue to issue WRITE SAME
      requests to the mpath device which causes paths to fail and (if mpath IO
      isn't configured to queue when no paths are available) it will result in
      actual IO errors to the upper layers.
      
      This fix doesn't help configurations that have additional devices
      stacked on top of the mpath device (e.g. LVM-created linear DM devices
      on top).  A proper fix that restacks all the queue_limits from the bottom
      of the device stack up will need to be explored if SCSI will continue to
      use this model of optimistically allowing op codes and then disabling
      them after they fail for the first time.
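
      The workaround amounts to clearing the mpath device's own limit the
      first time a path fails a WRITE SAME (a sketch; the completion-path
      context and helper names are assumptions):

	/* in multipath's request completion path, after an I/O error */
	if ((clone->cmd_flags & REQ_WRITE_SAME) &&
	    !clone->q->limits.max_write_same_sectors) {
		/* the underlying device has turned WRITE SAME off;
		 * propagate that to the mpath device's queue_limits */
		struct queue_limits *limits = dm_get_queue_limits(md);

		limits->max_write_same_sectors = 0;
	}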
      
      Before this patch:
      
      EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
      device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
      device-mapper: multipath: XXX snitm debugging: failing WRITE SAME IO with error=-121
      end_request: critical target error, dev dm-6, sector 528
      dm-6: WRITE SAME failed. Manually zeroing.
      device-mapper: multipath: Failing path 8:112.
      end_request: I/O error, dev dm-6, sector 4616
      dm-6: WRITE SAME failed. Manually zeroing.
      end_request: I/O error, dev dm-6, sector 4616
      end_request: I/O error, dev dm-6, sector 5640
      end_request: I/O error, dev dm-6, sector 6664
      end_request: I/O error, dev dm-6, sector 7688
      end_request: I/O error, dev dm-6, sector 524288
      Buffer I/O error on device dm-6, logical block 65536
      lost page write due to I/O error on dm-6
      JBD2: Error -5 detected when updating journal superblock for dm-6-8.
      end_request: I/O error, dev dm-6, sector 524296
      Aborting journal on device dm-6-8.
      end_request: I/O error, dev dm-6, sector 524288
      Buffer I/O error on device dm-6, logical block 65536
      lost page write due to I/O error on dm-6
      JBD2: Error -5 detected when updating journal superblock for dm-6-8.
      
      # cat /sys/block/sdh/queue/write_same_max_bytes
      0
      # cat /sys/block/dm-6/queue/write_same_max_bytes
      33553920
      
      After this patch:
      
      EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
      device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
      device-mapper: multipath: XXX snitm debugging: WRITE SAME I/O failed with error=-121
      end_request: critical target error, dev dm-6, sector 528
      dm-6: WRITE SAME failed. Manually zeroing.
      
      # cat /sys/block/sdh/queue/write_same_max_bytes
      0
      # cat /sys/block/dm-6/queue/write_same_max_bytes
      0
      
      It should be noted that WRITE SAME support wasn't enabled in DM
      multipath until v3.10.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm-snapshot: fix performance degradation due to small hash size · 0f64fad3
      Mikulas Patocka authored
      commit 60e356f381954d79088d0455e357db48cfdd6857 upstream.
      
      LVM2, since version 2.02.96, creates origin with zero size, then loads
      the snapshot driver and then loads the origin.  Consequently, the
      snapshot driver sees the origin size zero and sets the hash size to the
      lower bound 64.  Such small hash table causes performance degradation.
      
      This patch changes it so that the hash size is determined by the size of
      snapshot volume, not minimum of origin and snapshot size.  It doesn't
      make sense to set the snapshot size significantly larger than the origin
      size, so we do not need to take origin size into account when
      calculating the hash size.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm snapshot: workaround for a false positive lockdep warning · 4541f4e3
      Mikulas Patocka authored
      commit 5ea330a75bd86b2b2a01d7b85c516983238306fb upstream.
      
      The kernel reports a lockdep warning if a snapshot is invalidated because
      it runs out of space.
      
      The lockdep warning was triggered by commit 0976dfc1
      ("workqueue: Catch more locking problems with flush_work()") in v3.5.
      
      The warning is a false positive.  The real cause for the warning is that
      the lockdep engine treats different instances of md->lock as a single
      lock.
      
      This patch is a workaround - we use flush_workqueue instead of flush_work.
      This code path is not performance sensitive (it is called only on
      initialization or invalidation), thus it doesn't matter that we flush the
      whole workqueue.
      
      The real fix for the problem would be to teach the lockdep engine to treat
      different instances of md->lock as separate locks.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bcache: Fix flushes in writeback mode · 30d0e795
      Kent Overstreet authored
      commit c0f04d88e46d14de51f4baebb6efafb7d59e9f96 upstream.
      
      In writeback mode, when we get a cache flush we need to make sure we
      issue a flush to the backing device.
      
      The code for sending down an extra flush was wrong - by cloning the bio
      we were probably getting flags that didn't make sense for a bare flush,
      and also the old code was firing for FUA bios, for which we don't need
      to send a flush to the backing device.
      
      This was causing data corruption somehow - the mechanism was never
      determined, but this patch fixes it for the users that were seeing it.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bcache: Fix for handling overlapping extents when reading in a btree node · df8b0d94
      Kent Overstreet authored
      commit 84786438ed17978d72eeced580ab757e4da8830b upstream.
      
      btree_sort_fixup() was overly clever, because it was trying to avoid
      pulling a key off the btree iterator in more than one place.
      
      This led to a really obscure bug where we'd break early from the loop in
      btree_sort_fixup() if the current key overlapped with keys in more than
      one older set, and the next key it overlapped with was zero size.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>