1. 06 Jul, 2021 4 commits
  2. 01 Nov, 2020 1 commit
  3. 03 Sep, 2020 1 commit
    • writeback: Avoid skipping inode writeback · d74c235b
      Jan Kara authored
      
      
      commit 5afced3bf28100d81fb2fe7e98918632a08feaf5 upstream.
      
      Inode's i_io_list list head is used to attach inode to several different
      lists - wb->{b_dirty, b_dirty_time, b_io, b_more_io}. When flush worker
      prepares a list of inodes to writeback e.g. for sync(2), it moves inodes
      to b_io list. Thus it is critical for sync(2) data integrity guarantees
      that inode is not requeued to any other writeback list when inode is
      queued for processing by flush worker. That's the reason why
      writeback_single_inode() does not touch i_io_list (unless the inode is
      completely clean) and why __mark_inode_dirty() does not touch i_io_list
      if I_SYNC flag is set.
      
      However there are two flaws in the current logic:
      
      1) When inode has only I_DIRTY_TIME set but it is already queued in b_io
      list due to sync(2), concurrent __mark_inode_dirty(inode, I_DIRTY_SYNC)
      can still move inode back to b_dirty list resulting in skipping
      writeback of inode time stamps during sync(2).
      
      2) When inode is on b_dirty_time list and writeback_single_inode() races
      with __mark_inode_dirty() like:
      
      writeback_single_inode()		__mark_inode_dirty(inode, I_DIRTY_PAGES)
        inode->i_state |= I_SYNC
        __writeback_single_inode()
      					  inode->i_state |= I_DIRTY_PAGES;
      					  if (inode->i_state & I_SYNC)
      					    bail
        if (!(inode->i_state & I_DIRTY_ALL))
        - not true so nothing done
      
      We end up with I_DIRTY_PAGES inode on b_dirty_time list and thus
      standard background writeback will not writeback this inode leading to
      possible dirty throttling stalls etc. (thanks to Martijn Coenen for this
      analysis).
      
      Fix these problems by tracking whether inode is queued in b_io or
      b_more_io lists in a new I_SYNC_QUEUED flag. When this flag is set, we
      know flush worker has queued inode and we should not touch i_io_list.
      On the other hand we also know that once flush worker is done with the
      inode it will requeue the inode to appropriate dirty list. When
      I_SYNC_QUEUED is not set, __mark_inode_dirty() can (and must) move inode
      to appropriate dirty list.
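      
      As a rough illustration of that rule, here is a simplified sketch (not the
      actual fs/fs-writeback.c code; the helper name and the exact list handling
      are illustrative):
      
          /* Sketch: requeueing decision in __mark_inode_dirty(), simplified. */
          static void sketch_requeue_inode(struct inode *inode,
                                           struct bdi_writeback *wb)
          {
                  /*
                   * If the flush worker has the inode on b_io/b_more_io, it owns
                   * i_io_list and will requeue the inode itself when finished.
                   */
                  if (inode->i_state & I_SYNC_QUEUED)
                          return;
      
                  if (inode->i_state & I_DIRTY_TIME)
                          list_move(&inode->i_io_list, &wb->b_dirty_time);
                  else
                          list_move(&inode->i_io_list, &wb->b_dirty);
          }
      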
      Reported-by: Martijn Coenen <maco@android.com>
      Reviewed-by: Martijn Coenen <maco@android.com>
      Tested-by: Martijn Coenen <maco@android.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Fixes: 0ae45f63 ("vfs: add support for a lazytime mount option")
      CC: stable@vger.kernel.org
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d74c235b
  4. 21 Aug, 2020 1 commit
    • hugetlbfs: remove call to huge_pte_alloc without i_mmap_rwsem · 70bd1017
      Mike Kravetz authored
      commit 34ae204f18519f0920bd50a644abd6fefc8dbfcf upstream.
      
      Commit c0d0381a ("hugetlbfs: use i_mmap_rwsem for more pmd sharing
      synchronization") requires callers of huge_pte_alloc to hold i_mmap_rwsem
      in at least read mode.  This is because the explicit locking in
      huge_pmd_share (called by huge_pte_alloc) was removed.  When restructuring
      the code, the call to huge_pte_alloc in the else block at the beginning of
      hugetlb_fault was missed.
      
      Unfortunately, that else clause is exercised when there is no page table
      entry.  This will likely lead to a call to huge_pmd_share.  If
      huge_pmd_share thinks pmd sharing is possible, it will traverse the
      mapping tree (i_mmap) without holding i_mmap_rwsem.  If someone else is
      modifying the tree, bad things such as addressing exceptions or worse
      could happen.
      
      Simply remove the else clause.  It should have been removed previously.
      The code following the else will call huge_pte_alloc with the appropriate
      locking.
      
      To prevent this type of issue in the future, add routines to assert that
      i_mmap_rwsem is held, and call these routines in huge pmd sharing
      routines.
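      
      A sketch of what such assertion helpers look like (drawn from the
      description above; lockdep does the actual verification and the checks
      are no-ops in non-lockdep builds):
      
          static inline void i_mmap_assert_locked(struct address_space *mapping)
          {
                  lockdep_assert_held(&mapping->i_mmap_rwsem);
          }
      
          static inline void i_mmap_assert_write_locked(struct address_space *mapping)
          {
                  lockdep_assert_held_write(&mapping->i_mmap_rwsem);
          }
      
      These would then be called at the top of huge_pmd_share() (read mode
      required) and huge_pmd_unshare() (write mode required), respectively.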
      
      Fixes: c0d0381a ("hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization")
      Suggested-by: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: "Kirill A.Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Prakash Sangappa <prakash.sangappa@oracle.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/e670f327-5cf9-1959-96e4-6dc7cc30d3d5@oracle.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      70bd1017
  5. 08 Jul, 2020 2 commits
  6. 07 Jul, 2020 1 commit
  7. 18 Jun, 2020 1 commit
  8. 09 Jun, 2020 2 commits
  9. 04 Jun, 2020 2 commits
  10. 02 Jun, 2020 2 commits
    • mm: add readahead address space operation · 8151b4c8
      Matthew Wilcox (Oracle) authored
      
      
      This replaces ->readpages with a saner interface:
       - Return void instead of an ignored error code.
       - Page cache is already populated with locked pages when ->readahead
         is called.
       - New arguments can be passed to the implementation without changing
         all the filesystems that use a common helper function like
         mpage_readahead().
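      
      For orientation, the shape of the two operations in struct
      address_space_operations (signatures sketched from the description above):
      
          /* Old: returns an error code that callers ignored, and the pages are
           * handed over on a list that the implementation must consume. */
          int (*readpages)(struct file *filp, struct address_space *mapping,
                           struct list_head *pages, unsigned nr_pages);
      
          /* New: returns void; the pages are already locked and present in the
           * page cache, and any extra state travels in the readahead_control. */
          void (*readahead)(struct readahead_control *rac);
      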
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: William Kucharski <william.kucharski@oracle.com>
      Cc: Chao Yu <yuchao0@huawei.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Eric Biggers <ebiggers@google.com>
      Cc: Gao Xiang <gaoxiang25@huawei.com>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Cc: Miklos Szeredi <mszeredi@redhat.com>
      Link: http://lkml.kernel.org/r/20200414150233.24495-12-willy@infradead.org
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8151b4c8
    • vfs: track per-sb writeback errors and report them to syncfs · 735e4ae5
      Jeff Layton authored
      
      
      Patch series "vfs: have syncfs() return error when there are writeback
      errors", v6.
      
      Currently, syncfs does not return errors when one of the inodes fails to
      be written back.  It will return errors based on the legacy AS_EIO and
      AS_ENOSPC flags when syncing out the block device fails, but that's not
      particularly helpful for filesystems that aren't backed by a blockdev.
      It's also possible for a stray sync to lose those errors.
      
      The basic idea in this set is to track writeback errors at the
      superblock level, so that we can quickly and easily check whether
      something bad happened without having to fsync each file individually.
      syncfs is then changed to reliably report writeback errors after they
      occur, much in the same fashion as fsync does now.
      
      This patch (of 2):
      
      Usually we suggest that applications call fsync when they want to ensure
      that all data written to the file has made it to the backing store, but
      that can be inefficient when there are a lot of open files.
      
      Calling syncfs on the filesystem can be more efficient in some
      situations, but the error reporting doesn't currently work the way most
      people expect.  If a single inode on a filesystem reports a writeback
      error, syncfs won't necessarily return an error.  syncfs only returns an
      error if __sync_blockdev fails, and on some filesystems that's a no-op.
      
      It would be better if syncfs reported an error if there were any
      writeback failures.  Then applications could call syncfs to see if there
      are any errors on any open files, and could then call fsync on all of
      the other descriptors to figure out which one failed.
      
      This patch adds a new errseq_t to struct super_block, and has
      mapping_set_error also record writeback errors there.
      
      To report those errors, we also need to keep an errseq_t in struct file
      to act as a cursor.  This patch adds a dedicated field for that purpose,
      which slots nicely into 4 bytes of padding at the end of struct file on
      x86_64.
      
      An earlier version of this patch used an O_PATH file descriptor to cue
      the kernel that the open file should track the superblock error and not
      the inode's writeback error.
      
      I think that API is just too weird though.  This is simpler and should
      make syncfs error reporting "just work" even if someone is multiplexing
      fsync and syncfs on the same fds.
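      
      A sketch of the two halves described above (assuming the new fields are
      called s_wb_err on struct super_block and f_sb_err on struct file; the
      errseq helpers are the existing ones from linux/errseq.h):
      
          /* Recording: mapping_set_error() also latches the error on the sb. */
          static void sketch_record_sb_error(struct address_space *mapping, int err)
          {
                  errseq_set(&mapping->host->i_sb->s_wb_err, err);
          }
      
          /* Reporting: syncfs() checks the per-file cursor against the sb. */
          static int sketch_syncfs_check(struct super_block *sb, struct file *file)
          {
                  return errseq_check_and_advance(&sb->s_wb_err, &file->f_sb_err);
          }
      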
      Signed-off-by: Jeff Layton <jlayton@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Andres Freund <andres@anarazel.de>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: David Howells <dhowells@redhat.com>
      Link: http://lkml.kernel.org/r/20200428135155.19223-1-jlayton@kernel.org
      Link: http://lkml.kernel.org/r/20200428135155.19223-2-jlayton@kernel.org
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      735e4ae5
  11. 31 May, 2020 1 commit
    • vfs, afs, ext4: Make the inode hash table RCU searchable · 3f19b2ab
      David Howells authored
      
      
      Make the inode hash table RCU searchable so that searches that want to
      access or modify an inode without taking a ref on that inode can do so
      without taking the inode hash table lock.
      
      The main thing this requires is some RCU annotation on the list
      manipulation operations.  Inodes are already freed by RCU in most cases.
      
      Users of this interface must take care as the inode may be still under
      construction or may be being torn down around them.
      
      There are at least three instances where this can be of use:
      
       (1) Testing whether the inode number iunique() is going to return is
           currently unique (the iunique_lock is still held).
      
       (2) Ext4 date stamp updating.
      
       (3) AFS callback breaking.
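      
      A sketch of such an RCU-side lookup (simplified from the description; the
      function name is illustrative, and callers must cope with inodes that are
      still being set up or torn down):
      
          static struct inode *sketch_find_inode_rcu(struct super_block *sb,
                                                     struct hlist_head *head,
                                                     unsigned long ino)
          {
                  struct inode *inode;
      
                  /* Caller holds rcu_read_lock(), not the inode hash lock. */
                  hlist_for_each_entry_rcu(inode, head, i_hash) {
                          if (inode->i_ino == ino && inode->i_sb == sb &&
                              !(READ_ONCE(inode->i_state) & (I_FREEING | I_WILL_FREE)))
                                  return inode;
                  }
                  return NULL;
          }
      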
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      cc: linux-ext4@vger.kernel.org
      cc: linux-afs@lists.infradead.org
      3f19b2ab
  12. 28 May, 2020 1 commit
  13. 25 May, 2020 1 commit
  14. 21 May, 2020 1 commit
  15. 14 May, 2020 1 commit
    • vfs: allow unprivileged whiteout creation · a3c751a5
      Miklos Szeredi authored
      
      
      Whiteouts, unlike real device nodes, should not require privileges to create.
      
      The general concern with device nodes is that opening them can have side
      effects.  The kernel already avoids zero major (see
      Documentation/admin-guide/devices.txt).  To be on the safe side the patch
      explicitly forbids registering a char device with 0/0 number (see
      cdev_add()).
      
      This guarantees that a non-O_PATH open on a whiteout will fail with ENODEV;
      i.e. it won't have any side effect.
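      
      A sketch of the check referred to above (whiteouts are char device nodes
      with device number 0/0; shown here as a helper for illustration, while the
      commit places the check in cdev_add()):
      
          static int sketch_reject_whiteout_devnum(dev_t dev)
          {
                  /* Never let a real char device register at 0/0, so opening a
                   * whiteout cannot reach a driver and fails with -ENODEV. */
                  if (WARN_ON(dev == MKDEV(0, 0)))
                          return -EBUSY;
                  return 0;
          }
      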
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      a3c751a5
  16. 13 May, 2020 3 commits
  17. 09 May, 2020 1 commit
    • nfsd: clients don't need to break their own delegations · 28df3d15
      J. Bruce Fields authored
      
      
      We currently revoke read delegations on any write open or any operation
      that modifies file data or metadata (including rename, link, and
      unlink).  But if the delegation in question is the only read delegation
      and is held by the client performing the operation, that's not really
      necessary.
      
      It's not always possible to prevent this in the NFSv4.0 case, because
      there's not always a way to determine which client an NFSv4.0 delegation
      came from.  (In theory we could try to guess this from the transport
      layer, e.g., by assuming all traffic on a given TCP connection comes
      from the same client.  But that's not really correct.)
      
      In the NFSv4.1 case the session layer always tells us the client.
      
      This patch should remove such self-conflicts in all cases where we can
      reliably determine the client from the compound.
      
      To do that we need to track "who" is performing a given (possibly
      lease-breaking) file operation.  We're doing that by storing the
      information in the svc_rqst and using kthread_data() to map the current
      task back to a svc_rqst.
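      
      The mapping from a task back to its request can be sketched like this
      (the helper name is illustrative; kthread_data() is the existing kthread
      accessor mentioned above):
      
          /* Each nfsd worker passes its struct svc_rqst as the kthread data,
           * so lease-break code running in that thread's context can recover
           * the request, and from it the client, that triggered the break. */
          static struct svc_rqst *sketch_current_rqst(void)
          {
                  /* Only meaningful when current is known to be an nfsd thread. */
                  return kthread_data(current);
          }
      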
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      28df3d15
  18. 04 May, 2020 2 commits
  19. 27 Apr, 2020 1 commit
  20. 20 Apr, 2020 2 commits
  21. 02 Apr, 2020 1 commit
    • hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization · c0d0381a
      Mike Kravetz authored
      Patch series "hugetlbfs: use i_mmap_rwsem for more synchronization", v2.
      
      While discussing the issue with huge_pte_offset [1], I remembered that
      there were more outstanding hugetlb races.  These issues are:
      
      1) For shared pmds, huge PTE pointers returned by huge_pte_alloc can become
         invalid via a call to huge_pmd_unshare by another thread.
      2) hugetlbfs page faults can race with truncation causing invalid global
         reserve counts and state.
      
      A previous attempt was made to use i_mmap_rwsem in this manner as
      described at [2].  However, those patches were reverted starting with [3]
      due to locking issues.
      
      To effectively use i_mmap_rwsem to address the above issues it needs to be
      held (in read mode) during page fault processing.  However, during fault
      processing we need to lock the page we will be adding.  Lock ordering
      requires we take page lock before i_mmap_rwsem.  Waiting until after
      taking the page lock is too late in the fault process for the
      synchronization we want to do.
      
      To address this lock ordering issue, the following patches change the lock
      ordering for hugetlb pages.  This is not too invasive as hugetlbfs
      processing is done separate from core mm in many places.  However, I don't
      really like this idea.  Much ugliness is contained in the new routine
      hugetlb_page_mapping_lock_write() of patch 1.
      
      The only other way I can think of to address these issues is by catching
      all the races.  After catching a race, cleanup, backout, retry ...  etc,
      as needed.  This can get really ugly, especially for huge page
      reservations.  At one time, I started writing some of the reservation
      backout code for page faults and it got so ugly and complicated I went
      down the path of adding synchronization to avoid the races.  Any other
      suggestions would be welcome.
      
      [1] https://lore.kernel.org/linux-mm/1582342427-230392-1-git-send-email-longpeng2@huawei.com/
      [2] https://lore.kernel.org/linux-mm/20181222223013.22193-1-mike.kravetz@oracle.com/
      [3] https://lore.kernel.org/linux-mm/20190103235452.29335-1-mike.kravetz@oracle.com
      [4] https://lore.kernel.org/linux-mm/1584028670.7365.182.camel@lca.pw/
      [5] https://lore.kernel.org/lkml/20200312183142.108df9ac@canb.auug.org.au/
      
      
      
      This patch (of 2):
      
      While looking at BUGs associated with invalid huge page map counts, it was
      discovered and observed that a huge pte pointer could become 'invalid' and
      point to another task's page table.  Consider the following:
      
      A task takes a page fault on a shared hugetlbfs file and calls
      huge_pte_alloc to get a ptep.  Suppose the returned ptep points to a
      shared pmd.
      
      Now, another task truncates the hugetlbfs file.  As part of truncation, it
      unmaps everyone who has the file mapped.  If the range being truncated is
      covered by a shared pmd, huge_pmd_unshare will be called.  For all but the
      last user of the shared pmd, huge_pmd_unshare will clear the pud pointing
      to the pmd.  If the task in the middle of the page fault is not the last
      user, the ptep returned by huge_pte_alloc now points to another task's
      page table or worse.  This leads to bad things such as incorrect page
      map/reference counts or invalid memory references.
      
      To fix, expand the use of i_mmap_rwsem as follows:
      - i_mmap_rwsem is held in read mode whenever huge_pmd_share is called.
        huge_pmd_share is only called via huge_pte_alloc, so callers of
        huge_pte_alloc take i_mmap_rwsem before calling.  In addition, callers
        of huge_pte_alloc continue to hold the semaphore until finished with
        the ptep.
      - i_mmap_rwsem is held in write mode whenever huge_pmd_unshare is called.
      
      One problem with this scheme is that it requires taking i_mmap_rwsem
      before taking the page lock during page faults.  This is not the order
      specified in the rest of mm code.  Handling of hugetlbfs pages is mostly
      isolated today.  Therefore, we use this alternative locking order for
      PageHuge() pages.
      
               mapping->i_mmap_rwsem
                 hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
                   page->flags PG_locked (lock_page)
      
      To help with lock ordering issues, hugetlb_page_mapping_lock_write() is
      introduced to write lock the i_mmap_rwsem associated with a page.
      
      In most cases it is easy to get address_space via vma->vm_file->f_mapping.
      However, in the case of migration or memory errors for anon pages we do
      not have an associated vma.  A new routine _get_hugetlb_page_mapping()
      will use anon_vma to get address_space in these cases.
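      
      A sketch of the fault-side ordering described above (simplified; the real
      logic lives in hugetlb_fault() and its helpers, and error handling is
      omitted):
      
          static pte_t *sketch_hugetlb_fault_prepare(struct vm_area_struct *vma,
                                                     unsigned long address)
          {
                  struct address_space *mapping = vma->vm_file->f_mapping;
                  unsigned long sz = huge_page_size(hstate_vma(vma));
                  pte_t *ptep;
      
                  /*
                   * Take i_mmap_rwsem in read mode before huge_pte_alloc() and
                   * hold it until done with the ptep, so that a concurrent
                   * huge_pmd_unshare() (which requires it in write mode) cannot
                   * invalidate the pointer underneath us.
                   */
                  i_mmap_lock_read(mapping);
                  ptep = huge_pte_alloc(vma->vm_mm, address, sz);
      
                  /* hugetlb_fault_mutex and then the page lock follow, per the
                   * documented ordering; i_mmap_unlock_read() once finished. */
                  return ptep;
          }
      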
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Prakash Sangappa <prakash.sangappa@oracle.com>
      Link: http://lkml.kernel.org/r/20200316205756.146666-2-mike.kravetz@oracle.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c0d0381a
  22. 24 Mar, 2020 2 commits
    • fs: Constify vma argument to vma_is_dax · f05a3849
      Thomas Hellstrom (VMware) authored
      
      
      The function is used by the upcoming vma_is_special_huge(), which we want
      to take a const vma argument. Since vma_is_dax() only dereferences the vma
      argument for reading, constify it.
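      
      In other words (sketch of the before/after prototypes):
      
          bool vma_is_dax(struct vm_area_struct *vma);        /* before */
          bool vma_is_dax(const struct vm_area_struct *vma);  /* after  */
      
          /* ...which allows the upcoming helper to take a const vma as well: */
          bool vma_is_special_huge(const struct vm_area_struct *vma);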
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Reviewed-by: Roland Scheidegger <sroland@vmware.com>
      Acked-by: Christian König <christian.koenig@amd.com>
      f05a3849
    • block: remove __bdevname · ea3edd4d
      Christoph Hellwig authored
      
      
      There is no good reason for __bdevname to exist.  Just open code
      printing the string in the callers.  For three of them the format
      string can be trivially merged into existing printk statements,
      and in init/do_mounts.c we can at least do the scnprintf once at
      the start of the function, and unconditionally of CONFIG_BLOCK, to
      make the output for tiny configs a little more helpful.
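      
      For context, __bdevname() did little more than format the device number
      (sketch of the removed helper and of an open-coded caller; the call site
      shown is illustrative, not one of the actual three):
      
          const char *__bdevname(dev_t dev, char *buffer)
          {
                  scnprintf(buffer, BDEVNAME_SIZE, "unknown-block(%u,%u)",
                            MAJOR(dev), MINOR(dev));
                  return buffer;
          }
      
          /* Open-coded equivalent in a caller: */
          pr_warn("cannot open device unknown-block(%u,%u)\n",
                  MAJOR(dev), MINOR(dev));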
      
      Acked-by: Theodore Ts'o <tytso@mit.edu> # for ext4
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ea3edd4d
  23. 20 Mar, 2020 1 commit
    • firmware: Add new platform fallback mechanism and firmware_request_platform() · e4c2c0ff
      Hans de Goede authored
      
      
      In some cases the platform's main firmware (e.g. the UEFI fw) may contain
      an embedded copy of device firmware which needs to be (re)loaded into the
      peripheral. Normally such firmware would be part of linux-firmware, but in
      some cases this is not feasible, for 2 reasons:
      
      1) The firmware is customized for a specific use-case of the chipset / use
      with a specific hardware model, so we cannot have a single firmware file
      for the chipset. E.g. touchscreen controller firmware is compiled
      specifically for the hardware model it is used with, as it is
      calibrated for a specific digitizer model.
      
      2) Despite repeated attempts we have failed to get permission to
      redistribute the firmware. This is especially a problem with customized
      firmware: it gets created by the chip vendor for a specific ODM and the
      copyright may partially belong to the ODM, so the chip vendor cannot
      give blanket permission to distribute it.
      
      This commit adds a new platform fallback mechanism to the firmware loader
      which will try to lookup a device fw copy embedded in the platform's main
      firmware if direct filesystem lookup fails.
      
      Drivers which need such embedded fw copies can enable this fallback
      mechanism by using the new firmware_request_platform() function.
      
      Note that for now this is only supported on EFI platforms and even on
      these platforms firmware_fallback_platform() only works if
      CONFIG_EFI_EMBEDDED_FIRMWARE is enabled (this gets selected by drivers
      which need this), in all other cases firmware_fallback_platform() simply
      always returns -ENOENT.
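      
      A hedged usage sketch for a driver opting in (device and firmware names
      are made up for illustration):
      
          #include <linux/firmware.h>
      
          static int example_load_fw(struct device *dev)
          {
                  const struct firmware *fw;
                  int ret;
      
                  /* Tries the usual filesystem lookup first and, on EFI systems
                   * with CONFIG_EFI_EMBEDDED_FIRMWARE, falls back to a copy
                   * embedded in the platform firmware. */
                  ret = firmware_request_platform(&fw, "example-touch.fw", dev);
                  if (ret)
                          return ret;
      
                  /* ... push fw->data / fw->size to the device ... */
      
                  release_firmware(fw);
                  return 0;
          }
      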
      Reported-by: Dave Olsthoorn <dave@bewaar.me>
      Suggested-by: Peter Jones <pjones@redhat.com>
      Acked-by: Luis Chamberlain <mcgrof@kernel.org>
      Signed-off-by: Hans de Goede <hdegoede@redhat.com>
      Link: https://lore.kernel.org/r/20200115163554.101315-5-hdegoede@redhat.com
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      e4c2c0ff
  24. 06 Mar, 2020 1 commit
  25. 10 Feb, 2020 1 commit
    • firmware_loader: load files from the mount namespace of init · 901cff7c
      Topi Miettinen authored
      I have an experimental setup where almost every possible system
      service (even early startup ones) runs in separate namespace, using a
      dedicated, minimal file system. In the process of minimizing the contents
      of the file systems with regards to modules and firmware files, I
      noticed that in my system, the firmware files are loaded from three
      different mount namespaces, those of systemd-udevd, init and
      systemd-networkd. The logic of the source namespace is not very clear,
      it seems to depend on the driver, but the namespace of the current
      process is used.
      
      So, this patch tries to make things a bit clearer and changes firmware
      loading so that files are read only from the mount namespace of init. This
      may also improve security, though I think that using firmware files as an
      attack vector would be too impractical anyway.
      
      Later, it might make sense to make the mount namespace configurable,
      for example with a new file in /proc/sys/kernel/firmware_config/. That
      would allow a dedicated file system only for firmware files and those
      need not be present anywhere else. This configurability would make
      more sense if made also for kernel modules and /sbin/modprobe. Modules
      are already loaded from init namespace (usermodehelper uses kthreadd
      namespace) except when directly loaded by systemd-udevd.
      
      Instead of using the mount namespace of the current process to load
      firmware files, use the mount namespace of the init process.
      
      Link: https://lore.kernel.org/lkml/bb46ebae-4746-90d9-ec5b-fce4c9328c86@gmail.com/
      Link: https://lore.kernel.org/lkml/0e3f7653-c59d-9341-9db2-c88f5b988c68@gmail.com/
      
      Signed-off-by: Topi Miettinen <toiwoton@gmail.com>
      Link: https://lore.kernel.org/r/20200123125839.37168-1-toiwoton@gmail.com
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      901cff7c
  26. 07 Feb, 2020 1 commit
  27. 03 Feb, 2020 1 commit
    • fs: Enable bmap() function to properly return errors · 30460e1e
      Carlos Maiolino authored
      
      
      Currently, bmap() either returns the physical block number related to
      the requested file offset, or 0 when an error occurs or the requested
      offset maps into a hole.
      This patch makes the needed changes to enable bmap() to properly return
      errors, using the return value for the error code; a pointer must now be
      passed to bmap() to be filled with the mapped physical block.
      
      It changes the behavior of bmap() on return:
      
      - negative value in case of error
      - zero on success, or when the offset maps into a hole
      
      In case of a hole, *block will be zero too.
      
      Since this is a prep patch, for now the only error returned is -EINVAL if
      ->bmap doesn't exist.
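      
      Caller-side, the new convention sketched from the description above
      (helper name is illustrative):
      
          static int sketch_map_block(struct inode *inode, sector_t logical,
                                      sector_t *physical)
          {
                  sector_t block = logical;   /* in: file block; out: disk block */
                  int ret = bmap(inode, &block);
      
                  if (ret)
                          return ret;         /* e.g. -EINVAL if ->bmap is missing */
      
                  *physical = block;          /* zero means a hole */
                  return 0;
          }
      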
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      30460e1e
  28. 31 Jan, 2020 1 commit