| author | Mike Pagano <mpagano@gentoo.org> | 2014-10-31 07:18:46 -0400 |
| --- | --- | --- |
| committer | Mike Pagano <mpagano@gentoo.org> | 2014-10-31 07:18:46 -0400 |
| commit | fdca1444333f450d874582ba2ad6532254eb6d4d (patch) | |
| tree | f8c72aae7227ba46c29e5e34057f944b46cbdc4f | |
| parent | Linux patch 3.10.58 (diff) | |
| download | linux-patches-3.10-66.tar.gz linux-patches-3.10-66.tar.bz2 linux-patches-3.10-66.zip | |
Linux patch 3.10.59 (3.10-66)
-rw-r--r--  0000_README               |    4
-rw-r--r--  1058_linux-3.10.59.patch  | 1369
2 files changed, 1373 insertions, 0 deletions
diff --git a/0000_README b/0000_README index fb459ebf..580573b6 100644 --- a/0000_README +++ b/0000_README @@ -274,6 +274,10 @@ Patch: 1057_linux-3.10.58.patch From: http://www.kernel.org Desc: Linux 3.10.58 +Patch: 1058_linux-3.10.59.patch +From: http://www.kernel.org +Desc: Linux 3.10.59 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1058_linux-3.10.59.patch b/1058_linux-3.10.59.patch new file mode 100644 index 00000000..8bddbfcf --- /dev/null +++ b/1058_linux-3.10.59.patch @@ -0,0 +1,1369 @@ +diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt +new file mode 100644 +index 000000000000..ea45dd3901e3 +--- /dev/null ++++ b/Documentation/lzo.txt +@@ -0,0 +1,164 @@ ++ ++LZO stream format as understood by Linux's LZO decompressor ++=========================================================== ++ ++Introduction ++ ++ This is not a specification. No specification seems to be publicly available ++ for the LZO stream format. This document describes what input format the LZO ++ decompressor as implemented in the Linux kernel understands. The file subject ++ of this analysis is lib/lzo/lzo1x_decompress_safe.c. No analysis was made on ++ the compressor nor on any other implementations though it seems likely that ++ the format matches the standard one. The purpose of this document is to ++ better understand what the code does in order to propose more efficient fixes ++ for future bug reports. ++ ++Description ++ ++ The stream is composed of a series of instructions, operands, and data. The ++ instructions consist in a few bits representing an opcode, and bits forming ++ the operands for the instruction, whose size and position depend on the ++ opcode and on the number of literals copied by previous instruction. The ++ operands are used to indicate : ++ ++ - a distance when copying data from the dictionary (past output buffer) ++ - a length (number of bytes to copy from dictionary) ++ - the number of literals to copy, which is retained in variable "state" ++ as a piece of information for next instructions. ++ ++ Optionally depending on the opcode and operands, extra data may follow. These ++ extra data can be a complement for the operand (eg: a length or a distance ++ encoded on larger values), or a literal to be copied to the output buffer. ++ ++ The first byte of the block follows a different encoding from other bytes, it ++ seems to be optimized for literal use only, since there is no dictionary yet ++ prior to that byte. ++ ++ Lengths are always encoded on a variable size starting with a small number ++ of bits in the operand. If the number of bits isn't enough to represent the ++ length, up to 255 may be added in increments by consuming more bytes with a ++ rate of at most 255 per extra byte (thus the compression ratio cannot exceed ++ around 255:1). The variable length encoding using #bits is always the same : ++ ++ length = byte & ((1 << #bits) - 1) ++ if (!length) { ++ length = ((1 << #bits) - 1) ++ length += 255*(number of zero bytes) ++ length += first-non-zero-byte ++ } ++ length += constant (generally 2 or 3) ++ ++ For references to the dictionary, distances are relative to the output ++ pointer. Distances are encoded using very few bits belonging to certain ++ ranges, resulting in multiple copy instructions using different encodings. ++ Certain encodings involve one extra byte, others involve two extra bytes ++ forming a little-endian 16-bit quantity (marked LE16 below). 
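As a rough illustration of the variable-length length encoding described above, here is a small C sketch. It is not taken from lib/lzo/lzo1x_decompress_safe.c; the function name, parameters, and bounds handling are invented for this example. It expands the low operand bits of an instruction byte, plus any run of zero extension bytes (each worth 255) terminated by a non-zero byte, into a raw length; the per-instruction constant (generally 2 or 3) is left to the caller.

    #include <stddef.h>

    /*
     * Illustrative only; not the kernel's decoder. 'insn' is the
     * instruction byte, 'bits' the number of low operand bits, and
     * *pp points at the bytes following the instruction, bounded by
     * 'end'. Returns 0 if the input runs out, which the caller must
     * treat as an error.
     */
    static size_t lzo_expand_length(unsigned char insn, unsigned int bits,
                                    const unsigned char **pp,
                                    const unsigned char *end)
    {
        const unsigned char *p = *pp;
        size_t len = insn & ((1u << bits) - 1);

        if (len == 0) {
            len = (1u << bits) - 1;
            while (p < end && *p == 0) {   /* each zero byte adds 255 */
                len += 255;
                p++;
            }
            if (p == end)
                return 0;                  /* truncated input */
            len += *p++;                   /* first non-zero byte ends the run */
        }
        *pp = p;
        return len;
    }

For example, with bits = 4 and extension bytes 00 00 07 this yields 15 + 255 + 255 + 7 = 532, to which the long-literal instruction adds its constant 3. The lzo1x_decompress_safe() change later in this patch additionally caps the number of zero extension bytes (MAX_255_COUNT) so that this 255-per-byte accumulation cannot overflow.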
++ ++ After any instruction except the large literal copy, 0, 1, 2 or 3 literals ++ are copied before starting the next instruction. The number of literals that ++ were copied may change the meaning and behaviour of the next instruction. In ++ practice, only one instruction needs to know whether 0, less than 4, or more ++ literals were copied. This is the information stored in the <state> variable ++ in this implementation. This number of immediate literals to be copied is ++ generally encoded in the last two bits of the instruction but may also be ++ taken from the last two bits of an extra operand (eg: distance). ++ ++ End of stream is declared when a block copy of distance 0 is seen. Only one ++ instruction may encode this distance (0001HLLL), it takes one LE16 operand ++ for the distance, thus requiring 3 bytes. ++ ++ IMPORTANT NOTE : in the code some length checks are missing because certain ++ instructions are called under the assumption that a certain number of bytes ++ follow because it has already been garanteed before parsing the instructions. ++ They just have to "refill" this credit if they consume extra bytes. This is ++ an implementation design choice independant on the algorithm or encoding. ++ ++Byte sequences ++ ++ First byte encoding : ++ ++ 0..17 : follow regular instruction encoding, see below. It is worth ++ noting that codes 16 and 17 will represent a block copy from ++ the dictionary which is empty, and that they will always be ++ invalid at this place. ++ ++ 18..21 : copy 0..3 literals ++ state = (byte - 17) = 0..3 [ copy <state> literals ] ++ skip byte ++ ++ 22..255 : copy literal string ++ length = (byte - 17) = 4..238 ++ state = 4 [ don't copy extra literals ] ++ skip byte ++ ++ Instruction encoding : ++ ++ 0 0 0 0 X X X X (0..15) ++ Depends on the number of literals copied by the last instruction. ++ If last instruction did not copy any literal (state == 0), this ++ encoding will be a copy of 4 or more literal, and must be interpreted ++ like this : ++ ++ 0 0 0 0 L L L L (0..15) : copy long literal string ++ length = 3 + (L ?: 15 + (zero_bytes * 255) + non_zero_byte) ++ state = 4 (no extra literals are copied) ++ ++ If last instruction used to copy between 1 to 3 literals (encoded in ++ the instruction's opcode or distance), the instruction is a copy of a ++ 2-byte block from the dictionary within a 1kB distance. It is worth ++ noting that this instruction provides little savings since it uses 2 ++ bytes to encode a copy of 2 other bytes but it encodes the number of ++ following literals for free. 
It must be interpreted like this : ++ ++ 0 0 0 0 D D S S (0..15) : copy 2 bytes from <= 1kB distance ++ length = 2 ++ state = S (copy S literals after this block) ++ Always followed by exactly one byte : H H H H H H H H ++ distance = (H << 2) + D + 1 ++ ++ If last instruction used to copy 4 or more literals (as detected by ++ state == 4), the instruction becomes a copy of a 3-byte block from the ++ dictionary from a 2..3kB distance, and must be interpreted like this : ++ ++ 0 0 0 0 D D S S (0..15) : copy 3 bytes from 2..3 kB distance ++ length = 3 ++ state = S (copy S literals after this block) ++ Always followed by exactly one byte : H H H H H H H H ++ distance = (H << 2) + D + 2049 ++ ++ 0 0 0 1 H L L L (16..31) ++ Copy of a block within 16..48kB distance (preferably less than 10B) ++ length = 2 + (L ?: 7 + (zero_bytes * 255) + non_zero_byte) ++ Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S ++ distance = 16384 + (H << 14) + D ++ state = S (copy S literals after this block) ++ End of stream is reached if distance == 16384 ++ ++ 0 0 1 L L L L L (32..63) ++ Copy of small block within 16kB distance (preferably less than 34B) ++ length = 2 + (L ?: 31 + (zero_bytes * 255) + non_zero_byte) ++ Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S ++ distance = D + 1 ++ state = S (copy S literals after this block) ++ ++ 0 1 L D D D S S (64..127) ++ Copy 3-4 bytes from block within 2kB distance ++ state = S (copy S literals after this block) ++ length = 3 + L ++ Always followed by exactly one byte : H H H H H H H H ++ distance = (H << 3) + D + 1 ++ ++ 1 L L D D D S S (128..255) ++ Copy 5-8 bytes from block within 2kB distance ++ state = S (copy S literals after this block) ++ length = 5 + L ++ Always followed by exactly one byte : H H H H H H H H ++ distance = (H << 3) + D + 1 ++ ++Authors ++ ++ This document was written by Willy Tarreau <w@1wt.eu> on 2014/07/19 during an ++ analysis of the decompression code available in Linux 3.16-rc5. The code is ++ tricky, it is possible that this document contains mistakes or that a few ++ corner cases were overlooked. In any case, please report any doubt, fix, or ++ proposed updates to the author(s) so that the document can be updated. 
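To make the 0 0 0 1 H L L L case above concrete, the following C sketch (illustrative only; the struct and function names are invented, and the L == 0 length extension is omitted for brevity) shows how the LE16 operand splits into a 14-bit distance field and the 2-bit trailing-literal count, and how the end-of-stream condition falls out of the distance value.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    struct lzo_copy {
        size_t length;    /* bytes to copy back from the output buffer */
        size_t distance;  /* distance behind the output pointer */
        unsigned state;   /* literals to copy before the next instruction */
        bool end;         /* end-of-stream marker */
    };

    /*
     * Illustrative decode of a 0 0 0 1 H L L L instruction (byte values
     * 16..31). *pp points just past the instruction byte, at the LE16
     * operand; the zero-byte length extension for L == 0 is left out.
     */
    static struct lzo_copy decode_16_31(unsigned char insn,
                                        const unsigned char **pp)
    {
        const unsigned char *p = *pp;
        uint16_t le16 = (uint16_t)p[0] | ((uint16_t)p[1] << 8);
        struct lzo_copy c;

        c.length   = 2 + ((insn & 7) ? (insn & 7) : 7);
        c.distance = 16384 + ((size_t)((insn >> 3) & 1) << 14) + (le16 >> 2);
        c.state    = le16 & 3;               /* S bits: trailing literals */
        c.end      = (c.distance == 16384);  /* terminator per the text above */

        *pp = p + 2;
        return c;
    }

Since the distance is 16384 + (H << 14) + D, the value 16384 can only arise with H == 0 and an all-zero D field, which is the "block copy of distance 0" that the documentation above designates as the stream terminator.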
+diff --git a/Makefile b/Makefile +index c27454b8ca3e..7baf27f5cf0f 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 3 + PATCHLEVEL = 10 +-SUBLEVEL = 58 ++SUBLEVEL = 59 + EXTRAVERSION = + NAME = TOSSUG Baby Fish + +diff --git a/arch/arm/mach-at91/clock.c b/arch/arm/mach-at91/clock.c +index da841885d01c..64f9f1045539 100644 +--- a/arch/arm/mach-at91/clock.c ++++ b/arch/arm/mach-at91/clock.c +@@ -947,6 +947,7 @@ static int __init at91_clock_reset(void) + } + + at91_pmc_write(AT91_PMC_SCDR, scdr); ++ at91_pmc_write(AT91_PMC_PCDR, pcdr); + if (cpu_is_sama5d3()) + at91_pmc_write(AT91_PMC_PCDR1, pcdr1); + +diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h +index 899af807ef0f..c30a548cee56 100644 +--- a/arch/arm64/include/asm/compat.h ++++ b/arch/arm64/include/asm/compat.h +@@ -33,8 +33,8 @@ typedef s32 compat_ssize_t; + typedef s32 compat_time_t; + typedef s32 compat_clock_t; + typedef s32 compat_pid_t; +-typedef u32 __compat_uid_t; +-typedef u32 __compat_gid_t; ++typedef u16 __compat_uid_t; ++typedef u16 __compat_gid_t; + typedef u16 __compat_uid16_t; + typedef u16 __compat_gid16_t; + typedef u32 __compat_uid32_t; +diff --git a/arch/m68k/mm/hwtest.c b/arch/m68k/mm/hwtest.c +index 2c7dde3c6430..2a5259fd23eb 100644 +--- a/arch/m68k/mm/hwtest.c ++++ b/arch/m68k/mm/hwtest.c +@@ -28,9 +28,11 @@ + int hwreg_present( volatile void *regp ) + { + int ret = 0; ++ unsigned long flags; + long save_sp, save_vbr; + long tmp_vectors[3]; + ++ local_irq_save(flags); + __asm__ __volatile__ + ( "movec %/vbr,%2\n\t" + "movel #Lberr1,%4@(8)\n\t" +@@ -46,6 +48,7 @@ int hwreg_present( volatile void *regp ) + : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr) + : "a" (regp), "a" (tmp_vectors) + ); ++ local_irq_restore(flags); + + return( ret ); + } +@@ -58,9 +61,11 @@ EXPORT_SYMBOL(hwreg_present); + int hwreg_write( volatile void *regp, unsigned short val ) + { + int ret; ++ unsigned long flags; + long save_sp, save_vbr; + long tmp_vectors[3]; + ++ local_irq_save(flags); + __asm__ __volatile__ + ( "movec %/vbr,%2\n\t" + "movel #Lberr2,%4@(8)\n\t" +@@ -78,6 +83,7 @@ int hwreg_write( volatile void *regp, unsigned short val ) + : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr) + : "a" (regp), "a" (tmp_vectors), "g" (val) + ); ++ local_irq_restore(flags); + + return( ret ); + } +diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c +index 5c948177529e..bc79ab00536f 100644 +--- a/arch/s390/kvm/interrupt.c ++++ b/arch/s390/kvm/interrupt.c +@@ -71,6 +71,7 @@ static int __interrupt_is_deliverable(struct kvm_vcpu *vcpu, + return 0; + if (vcpu->arch.sie_block->gcr[0] & 0x2000ul) + return 1; ++ return 0; + case KVM_S390_INT_EMERGENCY: + if (psw_extint_disabled(vcpu)) + return 0; +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index f7f20f7fac3c..373058c9b75d 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -463,6 +463,7 @@ struct kvm_vcpu_arch { + u64 mmio_gva; + unsigned access; + gfn_t mmio_gfn; ++ u64 mmio_gen; + + struct kvm_pmu pmu; + +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c +index f187806dfc18..8533e69d2b89 100644 +--- a/arch/x86/kernel/cpu/intel.c ++++ b/arch/x86/kernel/cpu/intel.c +@@ -154,6 +154,21 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c) + setup_clear_cpu_cap(X86_FEATURE_ERMS); + } + } ++ ++ /* ++ * Intel Quark Core DevMan_001.pdf section 6.4.11 ++ * "The operating system also is required to invalidate (i.e., flush) ++ * the TLB when 
any changes are made to any of the page table entries. ++ * The operating system must reload CR3 to cause the TLB to be flushed" ++ * ++ * As a result cpu_has_pge() in arch/x86/include/asm/tlbflush.h should ++ * be false so that __flush_tlb_all() causes CR3 insted of CR4.PGE ++ * to be modified ++ */ ++ if (c->x86 == 5 && c->x86_model == 9) { ++ pr_info("Disabling PGE capability bit\n"); ++ setup_clear_cpu_cap(X86_FEATURE_PGE); ++ } + } + + #ifdef CONFIG_X86_32 +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c +index 711c649f80b7..e14b1f8667bb 100644 +--- a/arch/x86/kvm/mmu.c ++++ b/arch/x86/kvm/mmu.c +@@ -3072,7 +3072,7 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu) + if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) + return; + +- vcpu_clear_mmio_info(vcpu, ~0ul); ++ vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY); + kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC); + if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) { + hpa_t root = vcpu->arch.mmu.root_hpa; +diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h +index 3186542f2fa3..7626d3efa064 100644 +--- a/arch/x86/kvm/x86.h ++++ b/arch/x86/kvm/x86.h +@@ -78,15 +78,23 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu, + vcpu->arch.mmio_gva = gva & PAGE_MASK; + vcpu->arch.access = access; + vcpu->arch.mmio_gfn = gfn; ++ vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation; ++} ++ ++static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu) ++{ ++ return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation; + } + + /* +- * Clear the mmio cache info for the given gva, +- * specially, if gva is ~0ul, we clear all mmio cache info. ++ * Clear the mmio cache info for the given gva. If gva is MMIO_GVA_ANY, we ++ * clear all mmio cache info. + */ ++#define MMIO_GVA_ANY (~(gva_t)0) ++ + static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva) + { +- if (gva != (~0ul) && vcpu->arch.mmio_gva != (gva & PAGE_MASK)) ++ if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK)) + return; + + vcpu->arch.mmio_gva = 0; +@@ -94,7 +102,8 @@ static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva) + + static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva) + { +- if (vcpu->arch.mmio_gva && vcpu->arch.mmio_gva == (gva & PAGE_MASK)) ++ if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gva && ++ vcpu->arch.mmio_gva == (gva & PAGE_MASK)) + return true; + + return false; +@@ -102,7 +111,8 @@ static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva) + + static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa) + { +- if (vcpu->arch.mmio_gfn && vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT) ++ if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gfn && ++ vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT) + return true; + + return false; +diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c +index 01e21037d8fe..00a565676583 100644 +--- a/drivers/base/firmware_class.c ++++ b/drivers/base/firmware_class.c +@@ -1021,6 +1021,9 @@ _request_firmware(const struct firmware **firmware_p, const char *name, + if (!firmware_p) + return -EINVAL; + ++ if (!name || name[0] == '\0') ++ return -EINVAL; ++ + ret = _request_firmware_prepare(&fw, name, device); + if (ret <= 0) /* error or already assigned */ + goto out; +diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c +index 975719bc3450..b41994fd8460 100644 +--- a/drivers/base/regmap/regmap-debugfs.c ++++ b/drivers/base/regmap/regmap-debugfs.c +@@ -460,16 +460,20 @@ void 
regmap_debugfs_init(struct regmap *map, const char *name) + { + struct rb_node *next; + struct regmap_range_node *range_node; ++ const char *devname = "dummy"; + + INIT_LIST_HEAD(&map->debugfs_off_cache); + mutex_init(&map->cache_lock); + ++ if (map->dev) ++ devname = dev_name(map->dev); ++ + if (name) { + map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s", +- dev_name(map->dev), name); ++ devname, name); + name = map->debugfs_name; + } else { +- name = dev_name(map->dev); ++ name = devname; + } + + map->debugfs = debugfs_create_dir(name, regmap_debugfs_root); +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c +index 4b5cf2e34e9a..6a66f0b7d3d4 100644 +--- a/drivers/base/regmap/regmap.c ++++ b/drivers/base/regmap/regmap.c +@@ -1177,7 +1177,7 @@ int _regmap_write(struct regmap *map, unsigned int reg, + } + + #ifdef LOG_DEVICE +- if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0) ++ if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0) + dev_info(map->dev, "%x <= %x\n", reg, val); + #endif + +@@ -1437,7 +1437,7 @@ static int _regmap_read(struct regmap *map, unsigned int reg, + ret = map->reg_read(context, reg, val); + if (ret == 0) { + #ifdef LOG_DEVICE +- if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0) ++ if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0) + dev_info(map->dev, "%x => %x\n", reg, *val); + #endif + +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index 45aa8e760124..61a8ec4e5f4d 100644 +--- a/drivers/bluetooth/btusb.c ++++ b/drivers/bluetooth/btusb.c +@@ -302,6 +302,9 @@ static void btusb_intr_complete(struct urb *urb) + BT_ERR("%s corrupted event packet", hdev->name); + hdev->stat.err_rx++; + } ++ } else if (urb->status == -ENOENT) { ++ /* Avoid suspend failed when usb_kill_urb */ ++ return; + } + + if (!test_bit(BTUSB_INTR_RUNNING, &data->flags)) +@@ -390,6 +393,9 @@ static void btusb_bulk_complete(struct urb *urb) + BT_ERR("%s corrupted ACL packet", hdev->name); + hdev->stat.err_rx++; + } ++ } else if (urb->status == -ENOENT) { ++ /* Avoid suspend failed when usb_kill_urb */ ++ return; + } + + if (!test_bit(BTUSB_BULK_RUNNING, &data->flags)) +@@ -484,6 +490,9 @@ static void btusb_isoc_complete(struct urb *urb) + hdev->stat.err_rx++; + } + } ++ } else if (urb->status == -ENOENT) { ++ /* Avoid suspend failed when usb_kill_urb */ ++ return; + } + + if (!test_bit(BTUSB_ISOC_RUNNING, &data->flags)) +diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c +index db0be2fb05fe..db35c542eb20 100644 +--- a/drivers/bluetooth/hci_h5.c ++++ b/drivers/bluetooth/hci_h5.c +@@ -237,7 +237,7 @@ static void h5_pkt_cull(struct h5 *h5) + break; + + to_remove--; +- seq = (seq - 1) % 8; ++ seq = (seq - 1) & 0x07; + } + + if (seq != h5->rx_ack) +diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c +index 0b122f8c7005..92f34de7aee9 100644 +--- a/drivers/hv/channel.c ++++ b/drivers/hv/channel.c +@@ -199,8 +199,10 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size, + ret = vmbus_post_msg(open_msg, + sizeof(struct vmbus_channel_open_channel)); + +- if (ret != 0) ++ if (ret != 0) { ++ err = ret; + goto error1; ++ } + + t = wait_for_completion_timeout(&open_info->waitevent, 5*HZ); + if (t == 0) { +@@ -392,7 +394,6 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, + u32 next_gpadl_handle; + unsigned long flags; + int ret = 0; +- int t; + + next_gpadl_handle = atomic_read(&vmbus_connection.next_gpadl_handle); + atomic_inc(&vmbus_connection.next_gpadl_handle); +@@ -439,9 +440,7 @@ 
int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, + + } + } +- t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ); +- BUG_ON(t == 0); +- ++ wait_for_completion(&msginfo->waitevent); + + /* At this point, we received the gpadl created msg */ + *gpadl_handle = gpadlmsg->gpadl; +@@ -464,7 +463,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) + struct vmbus_channel_gpadl_teardown *msg; + struct vmbus_channel_msginfo *info; + unsigned long flags; +- int ret, t; ++ int ret; + + info = kmalloc(sizeof(*info) + + sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL); +@@ -486,11 +485,12 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) + ret = vmbus_post_msg(msg, + sizeof(struct vmbus_channel_gpadl_teardown)); + +- BUG_ON(ret != 0); +- t = wait_for_completion_timeout(&info->waitevent, 5*HZ); +- BUG_ON(t == 0); ++ if (ret) ++ goto post_msg_err; ++ ++ wait_for_completion(&info->waitevent); + +- /* Received a torndown response */ ++post_msg_err: + spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); + list_del(&info->msglistentry); + spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c +index b9f5d295cbec..a3b555808768 100644 +--- a/drivers/hv/connection.c ++++ b/drivers/hv/connection.c +@@ -393,10 +393,21 @@ int vmbus_post_msg(void *buffer, size_t buflen) + * insufficient resources. Retry the operation a couple of + * times before giving up. + */ +- while (retries < 3) { +- ret = hv_post_message(conn_id, 1, buffer, buflen); +- if (ret != HV_STATUS_INSUFFICIENT_BUFFERS) ++ while (retries < 10) { ++ ret = hv_post_message(conn_id, 1, buffer, buflen); ++ ++ switch (ret) { ++ case HV_STATUS_INSUFFICIENT_BUFFERS: ++ ret = -ENOMEM; ++ case -ENOMEM: ++ break; ++ case HV_STATUS_SUCCESS: + return ret; ++ default: ++ pr_err("hv_post_msg() failed; error code:%d\n", ret); ++ return -EINVAL; ++ } ++ + retries++; + msleep(100); + } +diff --git a/drivers/message/fusion/mptspi.c b/drivers/message/fusion/mptspi.c +index 5653e505f91f..424f51d1e2ce 100644 +--- a/drivers/message/fusion/mptspi.c ++++ b/drivers/message/fusion/mptspi.c +@@ -1422,6 +1422,11 @@ mptspi_probe(struct pci_dev *pdev, const struct pci_device_id *id) + goto out_mptspi_probe; + } + ++ /* VMWare emulation doesn't properly implement WRITE_SAME ++ */ ++ if (pdev->subsystem_vendor == 0x15AD) ++ sh->no_write_same = 1; ++ + spin_lock_irqsave(&ioc->FreeQlock, flags); + + /* Attach the SCSI Host to the IOC structure +diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c +index b53e5c3f403b..bb020ad3f76c 100644 +--- a/drivers/net/wireless/iwlwifi/pcie/drv.c ++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c +@@ -269,6 +269,8 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { + {IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x08B1, 0x4C60, iwl7260_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x08B1, 0x4C70, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)}, +@@ -306,6 +308,8 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { + {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x08B1, 
0xCC70, iwl7260_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x08B1, 0xCC60, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)}, +diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h +index a7630d5ec892..a629313dd98a 100644 +--- a/drivers/net/wireless/rt2x00/rt2800.h ++++ b/drivers/net/wireless/rt2x00/rt2800.h +@@ -1920,7 +1920,7 @@ struct mac_iveiv_entry { + * 2 - drop tx power by 12dBm, + * 3 - increase tx power by 6dBm + */ +-#define BBP1_TX_POWER_CTRL FIELD8(0x07) ++#define BBP1_TX_POWER_CTRL FIELD8(0x03) + #define BBP1_TX_ANTENNA FIELD8(0x18) + + /* +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c +index 5b4a9d9cd200..689f3c87ee5c 100644 +--- a/drivers/pci/pci-sysfs.c ++++ b/drivers/pci/pci-sysfs.c +@@ -175,7 +175,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, + { + struct pci_dev *pci_dev = to_pci_dev(dev); + +- return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02x\n", ++ return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n", + pci_dev->vendor, pci_dev->device, + pci_dev->subsystem_vendor, pci_dev->subsystem_device, + (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8), +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c +index 4510279e28dc..910339c0791f 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -28,6 +28,7 @@ + #include <linux/ioport.h> + #include <linux/sched.h> + #include <linux/ktime.h> ++#include <linux/mm.h> + #include <asm/dma.h> /* isa_dma_bridge_buggy */ + #include "pci.h" + +@@ -291,6 +292,25 @@ static void quirk_citrine(struct pci_dev *dev) + } + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine); + ++/* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */ ++static void quirk_extend_bar_to_page(struct pci_dev *dev) ++{ ++ int i; ++ ++ for (i = 0; i < PCI_STD_RESOURCE_END; i++) { ++ struct resource *r = &dev->resource[i]; ++ ++ if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) { ++ r->end = PAGE_SIZE - 1; ++ r->start = 0; ++ r->flags |= IORESOURCE_UNSET; ++ dev_info(&dev->dev, "expanded BAR %d to page size: %pR\n", ++ i, r); ++ } ++ } ++} ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page); ++ + /* + * S3 868 and 968 chips report region size equal to 32M, but they decode 64M. + * If it's needed, re-allocate the region. 
+diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c +index 245a9595a93a..ef0a78b0d730 100644 +--- a/drivers/scsi/be2iscsi/be_mgmt.c ++++ b/drivers/scsi/be2iscsi/be_mgmt.c +@@ -812,17 +812,20 @@ mgmt_static_ip_modify(struct beiscsi_hba *phba, + + if (ip_action == IP_ACTION_ADD) { + memcpy(req->ip_params.ip_record.ip_addr.addr, ip_param->value, +- ip_param->len); ++ sizeof(req->ip_params.ip_record.ip_addr.addr)); + + if (subnet_param) + memcpy(req->ip_params.ip_record.ip_addr.subnet_mask, +- subnet_param->value, subnet_param->len); ++ subnet_param->value, ++ sizeof(req->ip_params.ip_record.ip_addr.subnet_mask)); + } else { + memcpy(req->ip_params.ip_record.ip_addr.addr, +- if_info->ip_addr.addr, ip_param->len); ++ if_info->ip_addr.addr, ++ sizeof(req->ip_params.ip_record.ip_addr.addr)); + + memcpy(req->ip_params.ip_record.ip_addr.subnet_mask, +- if_info->ip_addr.subnet_mask, ip_param->len); ++ if_info->ip_addr.subnet_mask, ++ sizeof(req->ip_params.ip_record.ip_addr.subnet_mask)); + } + + rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0); +@@ -850,7 +853,7 @@ static int mgmt_modify_gateway(struct beiscsi_hba *phba, uint8_t *gt_addr, + req->action = gtway_action; + req->ip_addr.ip_type = BE2_IPV4; + +- memcpy(req->ip_addr.addr, gt_addr, param_len); ++ memcpy(req->ip_addr.addr, gt_addr, sizeof(req->ip_addr.addr)); + + return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0); + } +diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c +index f033b191a022..e6884940d107 100644 +--- a/drivers/scsi/qla2xxx/qla_target.c ++++ b/drivers/scsi/qla2xxx/qla_target.c +@@ -1514,12 +1514,10 @@ static inline void qlt_unmap_sg(struct scsi_qla_host *vha, + static int qlt_check_reserve_free_req(struct scsi_qla_host *vha, + uint32_t req_cnt) + { +- struct qla_hw_data *ha = vha->hw; +- device_reg_t __iomem *reg = ha->iobase; + uint32_t cnt; + + if (vha->req->cnt < (req_cnt + 2)) { +- cnt = (uint16_t)RD_REG_DWORD(®->isp24.req_q_out); ++ cnt = (uint16_t)RD_REG_DWORD(vha->req->req_q_out); + + ql_dbg(ql_dbg_tgt, vha, 0xe00a, + "Request ring circled: cnt=%d, vha->->ring_index=%d, " +diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c +index b9f0192758d6..0791c92e8c50 100644 +--- a/drivers/spi/spi-dw-mid.c ++++ b/drivers/spi/spi-dw-mid.c +@@ -89,7 +89,13 @@ err_exit: + + static void mid_spi_dma_exit(struct dw_spi *dws) + { ++ if (!dws->dma_inited) ++ return; ++ ++ dmaengine_terminate_all(dws->txchan); + dma_release_channel(dws->txchan); ++ ++ dmaengine_terminate_all(dws->rxchan); + dma_release_channel(dws->rxchan); + } + +@@ -136,7 +142,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change) + txconf.dst_addr = dws->dma_addr; + txconf.dst_maxburst = LNW_DMA_MSIZE_16; + txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; +- txconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES; ++ txconf.dst_addr_width = dws->dma_width; + txconf.device_fc = false; + + txchan->device->device_control(txchan, DMA_SLAVE_CONFIG, +@@ -159,7 +165,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change) + rxconf.src_addr = dws->dma_addr; + rxconf.src_maxburst = LNW_DMA_MSIZE_16; + rxconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; +- rxconf.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES; ++ rxconf.src_addr_width = dws->dma_width; + rxconf.device_fc = false; + + rxchan->device->device_control(rxchan, DMA_SLAVE_CONFIG, +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 8fcd2424e7f9..187911fbabce 100644 +--- a/fs/btrfs/inode.c ++++ 
b/fs/btrfs/inode.c +@@ -3545,7 +3545,8 @@ noinline int btrfs_update_inode(struct btrfs_trans_handle *trans, + * without delay + */ + if (!btrfs_is_free_space_inode(inode) +- && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) { ++ && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID ++ && !root->fs_info->log_root_recovering) { + btrfs_update_root_times(trans, root); + + ret = btrfs_delayed_update_inode(trans, root, inode); +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c +index b3896d5f233a..0e7f7765b3bb 100644 +--- a/fs/btrfs/relocation.c ++++ b/fs/btrfs/relocation.c +@@ -967,8 +967,11 @@ again: + need_check = false; + list_add_tail(&edge->list[UPPER], + &list); +- } else ++ } else { ++ if (upper->checked) ++ need_check = true; + INIT_LIST_HEAD(&edge->list[UPPER]); ++ } + } else { + upper = rb_entry(rb_node, struct backref_node, + rb_node); +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c +index 0544587d74f4..1f214689fa5e 100644 +--- a/fs/btrfs/transaction.c ++++ b/fs/btrfs/transaction.c +@@ -524,7 +524,6 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid) + if (transid <= root->fs_info->last_trans_committed) + goto out; + +- ret = -EINVAL; + /* find specified transaction */ + spin_lock(&root->fs_info->trans_lock); + list_for_each_entry(t, &root->fs_info->trans_list, list) { +@@ -540,9 +539,16 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid) + } + } + spin_unlock(&root->fs_info->trans_lock); +- /* The specified transaction doesn't exist */ +- if (!cur_trans) ++ ++ /* ++ * The specified transaction doesn't exist, or we ++ * raced with btrfs_commit_transaction ++ */ ++ if (!cur_trans) { ++ if (transid > root->fs_info->last_trans_committed) ++ ret = -EINVAL; + goto out; ++ } + } else { + /* find newest transaction that is committing | committed */ + spin_lock(&root->fs_info->trans_lock); +diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c +index 5eab400e2590..41baf8b5e0eb 100644 +--- a/fs/ecryptfs/inode.c ++++ b/fs/ecryptfs/inode.c +@@ -1051,7 +1051,7 @@ ecryptfs_setxattr(struct dentry *dentry, const char *name, const void *value, + } + + rc = vfs_setxattr(lower_dentry, name, value, size, flags); +- if (!rc) ++ if (!rc && dentry->d_inode) + fsstack_copy_attr_all(dentry->d_inode, lower_dentry->d_inode); + out: + return rc; +diff --git a/fs/namespace.c b/fs/namespace.c +index 00409add4d96..7f6a9348c589 100644 +--- a/fs/namespace.c ++++ b/fs/namespace.c +@@ -1274,6 +1274,8 @@ static int do_umount(struct mount *mnt, int flags) + * Special case for "unmounting" root ... + * we just try to remount it readonly. + */ ++ if (!capable(CAP_SYS_ADMIN)) ++ return -EPERM; + down_write(&sb->s_umount); + if (!(sb->s_flags & MS_RDONLY)) + retval = do_remount_sb(sb, MS_RDONLY, NULL, 0); +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 3fc87b6f9def..69fc437be661 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -6067,7 +6067,7 @@ static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cr + int ret = 0; + + if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0) +- return 0; ++ return -EAGAIN; + task = _nfs41_proc_sequence(clp, cred, false); + if (IS_ERR(task)) + ret = PTR_ERR(task); +diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c +index 1720d32ffa54..e1ba58c3d1ad 100644 +--- a/fs/nfs/nfs4renewd.c ++++ b/fs/nfs/nfs4renewd.c +@@ -88,10 +88,18 @@ nfs4_renew_state(struct work_struct *work) + } + nfs_expire_all_delegations(clp); + } else { ++ int ret; ++ + /* Queue an asynchronous RENEW. 
*/ +- ops->sched_state_renewal(clp, cred, renew_flags); ++ ret = ops->sched_state_renewal(clp, cred, renew_flags); + put_rpccred(cred); +- goto out_exp; ++ switch (ret) { ++ default: ++ goto out_exp; ++ case -EAGAIN: ++ case -ENOMEM: ++ break; ++ } + } + } else { + dprintk("%s: failed to call renewd. Reason: lease not expired \n", +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c +index 2c37442ed936..d482b86d0e0b 100644 +--- a/fs/nfs/nfs4state.c ++++ b/fs/nfs/nfs4state.c +@@ -1699,7 +1699,8 @@ restart: + if (status < 0) { + set_bit(ops->owner_flag_bit, &sp->so_flags); + nfs4_put_state_owner(sp); +- return nfs4_recovery_handle_error(clp, status); ++ status = nfs4_recovery_handle_error(clp, status); ++ return (status != 0) ? status : -EAGAIN; + } + + nfs4_put_state_owner(sp); +@@ -1708,7 +1709,7 @@ restart: + spin_unlock(&clp->cl_lock); + } + rcu_read_unlock(); +- return status; ++ return 0; + } + + static int nfs4_check_lease(struct nfs_client *clp) +@@ -1755,7 +1756,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status) + break; + case -NFS4ERR_STALE_CLIENTID: + clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); +- nfs4_state_clear_reclaim_reboot(clp); + nfs4_state_start_reclaim_reboot(clp); + break; + case -NFS4ERR_CLID_INUSE: +@@ -2174,14 +2174,11 @@ static void nfs4_state_manager(struct nfs_client *clp) + section = "reclaim reboot"; + status = nfs4_do_reclaim(clp, + clp->cl_mvops->reboot_recovery_ops); +- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) || +- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state)) +- continue; +- nfs4_state_end_reclaim_reboot(clp); +- if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state)) ++ if (status == -EAGAIN) + continue; + if (status < 0) + goto out_error; ++ nfs4_state_end_reclaim_reboot(clp); + } + + /* Now recover expired state... */ +@@ -2189,9 +2186,7 @@ static void nfs4_state_manager(struct nfs_client *clp) + section = "reclaim nograce"; + status = nfs4_do_reclaim(clp, + clp->cl_mvops->nograce_recovery_ops); +- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) || +- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) || +- test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state)) ++ if (status == -EAGAIN) + continue; + if (status < 0) + goto out_error; +diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c +index f1680cdbd88b..9be6b4163406 100644 +--- a/fs/notify/fanotify/fanotify_user.c ++++ b/fs/notify/fanotify/fanotify_user.c +@@ -69,7 +69,7 @@ static int create_fd(struct fsnotify_group *group, + + pr_debug("%s: group=%p event=%p\n", __func__, group, event); + +- client_fd = get_unused_fd(); ++ client_fd = get_unused_fd_flags(group->fanotify_data.f_flags); + if (client_fd < 0) + return client_fd; + +diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h +new file mode 100644 +index 000000000000..cdd1cc202d51 +--- /dev/null ++++ b/include/linux/compiler-gcc5.h +@@ -0,0 +1,66 @@ ++#ifndef __LINUX_COMPILER_H ++#error "Please don't include <linux/compiler-gcc5.h> directly, include <linux/compiler.h> instead." ++#endif ++ ++#define __used __attribute__((__used__)) ++#define __must_check __attribute__((warn_unused_result)) ++#define __compiler_offsetof(a, b) __builtin_offsetof(a, b) ++ ++/* Mark functions as cold. gcc will assume any path leading to a call ++ to them will be unlikely. This means a lot of manual unlikely()s ++ are unnecessary now for any paths leading to the usual suspects ++ like BUG(), printk(), panic() etc. 
[but let's keep them for now for ++ older compilers] ++ ++ Early snapshots of gcc 4.3 don't support this and we can't detect this ++ in the preprocessor, but we can live with this because they're unreleased. ++ Maketime probing would be overkill here. ++ ++ gcc also has a __attribute__((__hot__)) to move hot functions into ++ a special section, but I don't see any sense in this right now in ++ the kernel context */ ++#define __cold __attribute__((__cold__)) ++ ++#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) ++ ++#ifndef __CHECKER__ ++# define __compiletime_warning(message) __attribute__((warning(message))) ++# define __compiletime_error(message) __attribute__((error(message))) ++#endif /* __CHECKER__ */ ++ ++/* ++ * Mark a position in code as unreachable. This can be used to ++ * suppress control flow warnings after asm blocks that transfer ++ * control elsewhere. ++ * ++ * Early snapshots of gcc 4.5 don't support this and we can't detect ++ * this in the preprocessor, but we can live with this because they're ++ * unreleased. Really, we need to have autoconf for the kernel. ++ */ ++#define unreachable() __builtin_unreachable() ++ ++/* Mark a function definition as prohibited from being cloned. */ ++#define __noclone __attribute__((__noclone__)) ++ ++/* ++ * Tell the optimizer that something else uses this function or variable. ++ */ ++#define __visible __attribute__((externally_visible)) ++ ++/* ++ * GCC 'asm goto' miscompiles certain code sequences: ++ * ++ * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 ++ * ++ * Work it around via a compiler barrier quirk suggested by Jakub Jelinek. ++ * Fixed in GCC 4.8.2 and later versions. ++ * ++ * (asm goto is automatically volatile - the naming reflects this.) ++ */ ++#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0) ++ ++#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP ++#define __HAVE_BUILTIN_BSWAP32__ ++#define __HAVE_BUILTIN_BSWAP64__ ++#define __HAVE_BUILTIN_BSWAP16__ ++#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */ +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 8293545ac9b7..f87e9a8d364f 100644 +--- a/include/linux/sched.h ++++ b/include/linux/sched.h +@@ -1670,11 +1670,13 @@ extern void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut, + #define tsk_used_math(p) ((p)->flags & PF_USED_MATH) + #define used_math() tsk_used_math(current) + +-/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags */ ++/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags ++ * __GFP_FS is also cleared as it implies __GFP_IO. 
++ */ + static inline gfp_t memalloc_noio_flags(gfp_t flags) + { + if (unlikely(current->flags & PF_MEMALLOC_NOIO)) +- flags &= ~__GFP_IO; ++ flags &= ~(__GFP_IO | __GFP_FS); + return flags; + } + +diff --git a/lib/lzo/lzo1x_decompress_safe.c b/lib/lzo/lzo1x_decompress_safe.c +index 8563081e8da3..a1c387f6afba 100644 +--- a/lib/lzo/lzo1x_decompress_safe.c ++++ b/lib/lzo/lzo1x_decompress_safe.c +@@ -19,31 +19,21 @@ + #include <linux/lzo.h> + #include "lzodefs.h" + +-#define HAVE_IP(t, x) \ +- (((size_t)(ip_end - ip) >= (size_t)(t + x)) && \ +- (((t + x) >= t) && ((t + x) >= x))) ++#define HAVE_IP(x) ((size_t)(ip_end - ip) >= (size_t)(x)) ++#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x)) ++#define NEED_IP(x) if (!HAVE_IP(x)) goto input_overrun ++#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun ++#define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun + +-#define HAVE_OP(t, x) \ +- (((size_t)(op_end - op) >= (size_t)(t + x)) && \ +- (((t + x) >= t) && ((t + x) >= x))) +- +-#define NEED_IP(t, x) \ +- do { \ +- if (!HAVE_IP(t, x)) \ +- goto input_overrun; \ +- } while (0) +- +-#define NEED_OP(t, x) \ +- do { \ +- if (!HAVE_OP(t, x)) \ +- goto output_overrun; \ +- } while (0) +- +-#define TEST_LB(m_pos) \ +- do { \ +- if ((m_pos) < out) \ +- goto lookbehind_overrun; \ +- } while (0) ++/* This MAX_255_COUNT is the maximum number of times we can add 255 to a base ++ * count without overflowing an integer. The multiply will overflow when ++ * multiplying 255 by more than MAXINT/255. The sum will overflow earlier ++ * depending on the base count. Since the base count is taken from a u8 ++ * and a few bits, it is safe to assume that it will always be lower than ++ * or equal to 2*255, thus we can always prevent any overflow by accepting ++ * two less 255 steps. See Documentation/lzo.txt for more information. 
++ */ ++#define MAX_255_COUNT ((((size_t)~0) / 255) - 2) + + int lzo1x_decompress_safe(const unsigned char *in, size_t in_len, + unsigned char *out, size_t *out_len) +@@ -75,17 +65,24 @@ int lzo1x_decompress_safe(const unsigned char *in, size_t in_len, + if (t < 16) { + if (likely(state == 0)) { + if (unlikely(t == 0)) { ++ size_t offset; ++ const unsigned char *ip_last = ip; ++ + while (unlikely(*ip == 0)) { +- t += 255; + ip++; +- NEED_IP(1, 0); ++ NEED_IP(1); + } +- t += 15 + *ip++; ++ offset = ip - ip_last; ++ if (unlikely(offset > MAX_255_COUNT)) ++ return LZO_E_ERROR; ++ ++ offset = (offset << 8) - offset; ++ t += offset + 15 + *ip++; + } + t += 3; + copy_literal_run: + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) +- if (likely(HAVE_IP(t, 15) && HAVE_OP(t, 15))) { ++ if (likely(HAVE_IP(t + 15) && HAVE_OP(t + 15))) { + const unsigned char *ie = ip + t; + unsigned char *oe = op + t; + do { +@@ -101,8 +98,8 @@ copy_literal_run: + } else + #endif + { +- NEED_OP(t, 0); +- NEED_IP(t, 3); ++ NEED_OP(t); ++ NEED_IP(t + 3); + do { + *op++ = *ip++; + } while (--t > 0); +@@ -115,7 +112,7 @@ copy_literal_run: + m_pos -= t >> 2; + m_pos -= *ip++ << 2; + TEST_LB(m_pos); +- NEED_OP(2, 0); ++ NEED_OP(2); + op[0] = m_pos[0]; + op[1] = m_pos[1]; + op += 2; +@@ -136,13 +133,20 @@ copy_literal_run: + } else if (t >= 32) { + t = (t & 31) + (3 - 1); + if (unlikely(t == 2)) { ++ size_t offset; ++ const unsigned char *ip_last = ip; ++ + while (unlikely(*ip == 0)) { +- t += 255; + ip++; +- NEED_IP(1, 0); ++ NEED_IP(1); + } +- t += 31 + *ip++; +- NEED_IP(2, 0); ++ offset = ip - ip_last; ++ if (unlikely(offset > MAX_255_COUNT)) ++ return LZO_E_ERROR; ++ ++ offset = (offset << 8) - offset; ++ t += offset + 31 + *ip++; ++ NEED_IP(2); + } + m_pos = op - 1; + next = get_unaligned_le16(ip); +@@ -154,13 +158,20 @@ copy_literal_run: + m_pos -= (t & 8) << 11; + t = (t & 7) + (3 - 1); + if (unlikely(t == 2)) { ++ size_t offset; ++ const unsigned char *ip_last = ip; ++ + while (unlikely(*ip == 0)) { +- t += 255; + ip++; +- NEED_IP(1, 0); ++ NEED_IP(1); + } +- t += 7 + *ip++; +- NEED_IP(2, 0); ++ offset = ip - ip_last; ++ if (unlikely(offset > MAX_255_COUNT)) ++ return LZO_E_ERROR; ++ ++ offset = (offset << 8) - offset; ++ t += offset + 7 + *ip++; ++ NEED_IP(2); + } + next = get_unaligned_le16(ip); + ip += 2; +@@ -174,7 +185,7 @@ copy_literal_run: + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) + if (op - m_pos >= 8) { + unsigned char *oe = op + t; +- if (likely(HAVE_OP(t, 15))) { ++ if (likely(HAVE_OP(t + 15))) { + do { + COPY8(op, m_pos); + op += 8; +@@ -184,7 +195,7 @@ copy_literal_run: + m_pos += 8; + } while (op < oe); + op = oe; +- if (HAVE_IP(6, 0)) { ++ if (HAVE_IP(6)) { + state = next; + COPY4(op, ip); + op += next; +@@ -192,7 +203,7 @@ copy_literal_run: + continue; + } + } else { +- NEED_OP(t, 0); ++ NEED_OP(t); + do { + *op++ = *m_pos++; + } while (op < oe); +@@ -201,7 +212,7 @@ copy_literal_run: + #endif + { + unsigned char *oe = op + t; +- NEED_OP(t, 0); ++ NEED_OP(t); + op[0] = m_pos[0]; + op[1] = m_pos[1]; + op += 2; +@@ -214,15 +225,15 @@ match_next: + state = next; + t = next; + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) +- if (likely(HAVE_IP(6, 0) && HAVE_OP(4, 0))) { ++ if (likely(HAVE_IP(6) && HAVE_OP(4))) { + COPY4(op, ip); + op += t; + ip += t; + } else + #endif + { +- NEED_IP(t, 3); +- NEED_OP(t, 0); ++ NEED_IP(t + 3); ++ NEED_OP(t); + while (t > 0) { + *op++ = *ip++; + t--; +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c +index f92818155958..175dca44c97e 
100644 +--- a/sound/core/pcm_native.c ++++ b/sound/core/pcm_native.c +@@ -3197,7 +3197,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = { + + #ifndef ARCH_HAS_DMA_MMAP_COHERENT + /* This should be defined / handled globally! */ +-#ifdef CONFIG_ARM ++#if defined(CONFIG_ARM) || defined(CONFIG_ARM64) + #define ARCH_HAS_DMA_MMAP_COHERENT + #endif + #endif +diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c +index cae36597aa71..0a34b5f1c475 100644 +--- a/sound/pci/emu10k1/emu10k1_callback.c ++++ b/sound/pci/emu10k1/emu10k1_callback.c +@@ -85,6 +85,8 @@ snd_emu10k1_ops_setup(struct snd_emux *emux) + * get more voice for pcm + * + * terminate most inactive voice and give it as a pcm voice. ++ * ++ * voice_lock is already held. + */ + int + snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw) +@@ -92,12 +94,10 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw) + struct snd_emux *emu; + struct snd_emux_voice *vp; + struct best_voice best[V_END]; +- unsigned long flags; + int i; + + emu = hw->synth; + +- spin_lock_irqsave(&emu->voice_lock, flags); + lookup_voices(emu, hw, best, 1); /* no OFF voices */ + for (i = 0; i < V_END; i++) { + if (best[i].voice >= 0) { +@@ -113,11 +113,9 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw) + vp->emu->num_voices--; + vp->ch = -1; + vp->state = SNDRV_EMUX_ST_OFF; +- spin_unlock_irqrestore(&emu->voice_lock, flags); + return ch; + } + } +- spin_unlock_irqrestore(&emu->voice_lock, flags); + + /* not found */ + return -ENOMEM; +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h +index 8b75bcf136f6..d5bed1d25713 100644 +--- a/sound/usb/quirks-table.h ++++ b/sound/usb/quirks-table.h +@@ -386,6 +386,36 @@ YAMAHA_DEVICE(0x105d, NULL), + } + }, + { ++ USB_DEVICE(0x0499, 0x1509), ++ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { ++ /* .vendor_name = "Yamaha", */ ++ /* .product_name = "Steinberg UR22", */ ++ .ifnum = QUIRK_ANY_INTERFACE, ++ .type = QUIRK_COMPOSITE, ++ .data = (const struct snd_usb_audio_quirk[]) { ++ { ++ .ifnum = 1, ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE ++ }, ++ { ++ .ifnum = 2, ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE ++ }, ++ { ++ .ifnum = 3, ++ .type = QUIRK_MIDI_YAMAHA ++ }, ++ { ++ .ifnum = 4, ++ .type = QUIRK_IGNORE_INTERFACE ++ }, ++ { ++ .ifnum = -1 ++ } ++ } ++ } ++}, ++{ + USB_DEVICE(0x0499, 0x150a), + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { + /* .vendor_name = "Yamaha", */ +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 8cf1cd2fadaa..a17f190be58e 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -52,6 +52,7 @@ + + #include <asm/processor.h> + #include <asm/io.h> ++#include <asm/ioctl.h> + #include <asm/uaccess.h> + #include <asm/pgtable.h> + +@@ -1981,6 +1982,9 @@ static long kvm_vcpu_ioctl(struct file *filp, + if (vcpu->kvm->mm != current->mm) + return -EIO; + ++ if (unlikely(_IOC_TYPE(ioctl) != KVMIO)) ++ return -EINVAL; ++ + #if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS) + /* + * Special cases: vcpu ioctls that are asynchronous to vcpu execution, |