1. 26 Mar, 2015 1 commit
  2. 30 Jan, 2015 1 commit
  3. 16 Jan, 2015 1 commit
  4. 12 Dec, 2013 1 commit
  5. 27 Sep, 2013 1 commit
  6. 29 Aug, 2013 2 commits
      xen/events: mask events when changing their VCPU binding · 131cb95f
      David Vrabel authored
      commit 4704fe4f03a5ab27e3c36184af85d5000e0f8a48 upstream.
      
      When an event is being bound to a VCPU there is a window between the
      EVTCHNOP_bind_vcpu call and the adjustment of the local per-cpu masks
      where an event may be lost.  The hypervisor upcalls the new VCPU but
      the kernel thinks that the event is still bound to the old VCPU and
      ignores it.
      
      There is even a problem when the event is being bound to the same VCPU
      as there is a small window between the clear_bit() and set_bit() calls
      in bind_evtchn_to_cpu().  When scanning for pending events, the kernel
      may read the bit when it is momentarily clear and ignore the event.
      
      Avoid this by masking the event during the whole bind operation.
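
      A minimal sketch of the fixed rebind path (simplified; helper names
      follow drivers/xen/events.c, but the body is illustrative, not the
      exact upstream code):

              static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
              {
                      int evtchn = evtchn_from_irq(irq);
                      struct evtchn_bind_vcpu bind_vcpu = {
                              .port = evtchn,
                              .vcpu = tcpu,
                      };
                      int masked;

                      /* Mask for the whole operation so the event cannot
                       * fire (and be lost) while neither the hypervisor's
                       * binding nor the local per-cpu masks are
                       * consistent. */
                      masked = test_and_set_mask(evtchn);

                      if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu,
                                                      &bind_vcpu) == 0)
                              /* The clear_bit()/set_bit() window in
                               * bind_evtchn_to_cpu() is now harmless: the
                               * event stays masked across it. */
                              bind_evtchn_to_cpu(evtchn, tcpu);

                      /* Unmask only if the event was unmasked on entry. */
                      if (!masked)
                              unmask_evtchn(evtchn);

                      return 0;
              }
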
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      xen/events: initialize local per-cpu mask for all possible events · d8251a94
      David Vrabel authored
      commit 84ca7a8e45dafb49cd5ca90a343ba033e2885c17 upstream.
      
      The sizeof() argument in init_evtchn_cpu_bindings() is incorrect,
      resulting in only the first 64 ports (or 32 in 32-bit guests) having
      their bindings initialized to VCPU 0.
      
      In most cases this does not cause a problem as request_irq() will set
      the irq affinity which will set the correct local per-cpu mask.
      However, if the request_irq() is called on a VCPU other than 0, there
      is a window between the unmasking of the event and the affinity being
      set where an event may be lost because it is not locally unmasked on
      any VCPU. If request_irq() is called on VCPU 0 then local irqs are
      disabled during the window and the race does not occur.
      
      Fix this by initializing all NR_EVENT_CHANNELS bits in the local
      per-cpu masks.
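
      The class of bug, sketched (names are illustrative; the real code is
      in init_evtchn_cpu_bindings()):

              /* One bit per event channel port, per CPU. */
              unsigned long *mask = per_cpu(cpu_evtchn_mask, cpu);

              /* Buggy: sizeof() here yields the size of one unsigned long
               * (8 bytes on 64-bit, 4 on 32-bit), so only the first
               * BITS_PER_LONG ports are marked as bound to VCPU 0. */
              memset(mask, (cpu == 0) ? ~0 : 0, sizeof(*mask));

              /* Fixed: cover a bit for every possible event channel. */
              memset(mask, (cpu == 0) ? ~0 : 0, NR_EVENT_CHANNELS / 8);
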
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  7. 04 Aug, 2013 1 commit
      xen/evtchn: avoid a deadlock when unbinding an event channel · 3d63d1e0
      David Vrabel authored
      commit 179fbd5a45f0d4034cc6fd37b8d367a3b79663c4 upstream.
      
      Unbinding an event channel (either with the ioctl or when the evtchn
      device is closed) may deadlock because disable_irq() is called with
      port_user_lock held which is also locked by the interrupt handler.
      
      Think of IOCTL_EVTCHN_UNBIND being serviced: the routine has just
      taken the lock, and an interrupt happens. evtchn_interrupt() is
      invoked, tries to take the lock, and spins forever.
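
      As an interleaving (illustrative):

       CPU A: IOCTL_EVTCHN_UNBIND            CPU B: event arrives
       spin_lock_irqsave(&port_user_lock)
                                             evtchn_interrupt()
                                               spin_lock(&port_user_lock)
                                               ... spins: A holds it ...
       disable_irq()   waits for B's handler to finish -> deadlock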
      
      A quick glance at the code shows that the spinlock is a local-IRQ
      variant. Unfortunately that does not help, as "disable_irq() waits for
      the interrupt handler on all CPUs to stop running.  If the irq occurs
      on another VCPU, it tries to take port_user_lock and can't because
      the unbind ioctl is holding it." (from David). Hence we cannot
      depend on that spinlock to protect us. We could make it a system-wide
      IRQ-disabling spinlock, but there is a better way.
      
      We can piggyback on the fact that the spinlock exists only to keep
      the get_port_user() checks up-to-date, and we can alter those checks
      to not depend on the spinlock (the path is already protected by
      u->bind_mutex in the ioctl). That lets us remove the unnecessary
      locking from the IOCTL_EVTCHN_UNBIND path.
      
      In the interrupt handler we cannot use the mutex, but we do not
      need it.
      
      "The unbind disables the irq before making the port user stale, so when
      you clear it you are guaranteed that the interrupt handler that might
      use that port cannot be running." (from David).
      
      Hence this patch removes the spinlock usage on the teardown path
      and piggybacks on disable_irq() happening before we muck with the
      get_port_user() data. This ensures that the interrupt handler will
      never run on stale data.
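
      A sketch of the resulting teardown ordering (simplified; the function
      and helper names follow the driver, but set_port_user() is an assumed
      counterpart to the get_port_user() mentioned above):

              static void evtchn_unbind_from_user(struct per_user_data *u,
                                                  int port)
              {
                      int irq = irq_from_evtchn(port);

                      /* No port_user_lock: this path is serialized by
                       * u->bind_mutex, taken in the ioctl. */

                      /* Disables the irq and waits for any running handler
                       * to complete before tearing it down... */
                      unbind_from_irqhandler(irq, (void *)(unsigned long)port);

                      /* ...so by the time the port user is made stale, the
                       * interrupt handler cannot be running anywhere. */
                      set_port_user(port, NULL);
              }
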
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v1: Expanded the commit description a bit]
      Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. 10 Jun, 2013 1 commit
      xen/tmem: Don't over-write tmem_frontswap_poolid after tmem_frontswap_init set it. · b2c75c44
      Konrad Rzeszutek Wilk authored
      Commit 10a7a077 ("xen: tmem: enable Xen
      tmem shim to be built/loaded as a module") allows the tmem module
      to be loaded at any time. For this to work, the frontswap API had to
      be able to call tmem_frontswap_init asynchronously, before
      or after the swap image has been set. That was added in git
      commit 905cd0e1
      ("mm: frontswap: lazy initialization to allow tmem backends to build/run as modules").
      
      Which means we could do this (the common case):
      
       modprobe tmem		[so calls frontswap_register_ops, no ->init]
      			 modifies tmem_frontswap_poolid = -1
       swapon /dev/xvda1	[__frontswap_init, calls -> init, tmem_frontswap_poolid is
      			 < 0 so tmem hypercall done]
      
      Or the failing one:
      
 swapon /dev/xvda1	[calls __frontswap_init, sets the need_init bitmap]
 modprobe tmem		[calls frontswap_register_ops, ->init is called, finds out
			tmem_frontswap_poolid is 0, does not make a hypercall.
			Later in module_init, sets tmem_frontswap_poolid=-1]
      
      Which meant that in the failing case we would not make the hypercall
      to initialize the pool and would never be able to make any frontswap
      backend calls.
      
      Moving the frontswap_register_ops call to after the setting of
      tmem_frontswap_poolid fixes it.
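
      Reduced to the two relevant lines of the module init (a sketch, not
      the full xen_tmem_init()):

              static int __init xen_tmem_init(void)
              {
                      /* Set the sentinel BEFORE registering: frontswap may
                       * invoke our ->init synchronously for a swap device
                       * that is already waiting, and ->init only makes the
                       * pool-create hypercall while tmem_frontswap_poolid
                       * is < 0. */
                      tmem_frontswap_poolid = -1;
                      frontswap_register_ops(&tmem_frontswap_ops);
                      return 0;
              }
              module_init(xen_tmem_init);
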
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: Bob Liu <bob.liu@oracle.com>
  9. 29 May, 2013 4 commits
  10. 28 May, 2013 1 commit
  11. 20 May, 2013 3 commits
  12. 15 May, 2013 10 commits
  13. 08 May, 2013 2 commits
  14. 01 May, 2013 4 commits
  15. 19 Apr, 2013 1 commit
  16. 17 Apr, 2013 1 commit
  17. 16 Apr, 2013 2 commits
  18. 02 Apr, 2013 1 commit
  19. 27 Mar, 2013 2 commits
      xen/events: avoid race with raising an event in unmask_evtchn() · c26377e6
      David Vrabel authored
      In unmask_evtchn(), the mask bit is cleared after testing for
      pending.  If the event becomes pending between the test and the
      clear, the upcall will not be raised and the event may be lost or
      delayed.
      
      Avoid this by always clearing the mask bit before checking for
      pending.  If a hypercall is needed, remask the event as
      EVTCHNOP_unmask will only retrigger pending events if they were
      masked.
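
      A sketch of the fixed ordering (simplified; need_hypercall stands for
      the "bound to another CPU / PV-on-HVM" condition that forces the
      hypercall):

              struct evtchn_unmask op = { .port = port };
              int pending;

              /* Clear the mask bit first, then test for pending. */
              sync_clear_bit(port, &s->evtchn_mask[0]);
              pending = sync_test_bit(port, &s->evtchn_pending[0]);

              if (pending && need_hypercall) {
                      /* Re-mask before EVTCHNOP_unmask: the hypercall only
                       * retriggers the upcall for events that are masked. */
                      sync_set_bit(port, &s->evtchn_mask[0]);
                      HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &op);
              }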
      
      This fixes a regression introduced in 3.7 by
      b5e57923 (xen/events: fix
      unmask_evtchn for PV on HVM guests), which reordered the
      clear-mask and check-pending operations.
      
      Changes in v2:
      - set mask before hypercall.
      
      Cc: stable@vger.kernel.org
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      xen/acpi-stub: Disable it b/c the acpi_processor_add is no longer called. · 76fc2537
      Konrad Rzeszutek Wilk authored
      With the Xen ACPI stub code (CONFIG_XEN_STUB=y) enabled, the power
      C and P states are no longer uploaded to the hypervisor.
      
      The reason is that the Xen CPU hotplug code (xen-acpi-cpuhotplug.c)
      and xen-acpi-stub.c register themselves as the "processor" type object.
      
      That means the generic processor driver (processor_driver.c) stops
      working and never calls acpi_processor_add(), which populates the

               per_cpu(processors, pr->id) = pr;

      structure. The 'pr' is gathered by the acpi_processor_get_info()
      function, which does the job of finding the C-states and figuring
      out the PBLK address.
      
      The 'processors->pr' is then later used by xen-acpi-processor.c (the
      one that uploads C and P states to the hypervisor). Since it is NULL,
      we end up skipping the gathering of _PSD, _PSS, _PCT, etc., and never
      upload the power management data.
      
      The end result is that enabling the CONFIG_XEN_STUB in the build means that
      xen-acpi-processor is not working anymore.
      
      This temporary patch fixes it by marking the XEN_STUB driver as
      BROKEN until this can be properly fixed.
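
      The stopgap amounts to a one-line Kconfig change along these lines
      (illustrative):

              config XEN_STUB
                      bool "Xen stub drivers"
                      depends on XEN && X86_64 && BROKEN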
      
      CC: jinsong.liu@intel.com
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>