@@ -519,10 +519,9 @@ routines, e.g.:::
 Part II - Non-coherent DMA allocations
 --------------------------------------
 
-These APIs allow to allocate pages in the kernel direct mapping that are
-guaranteed to be DMA addressable. This means that unlike dma_alloc_coherent,
-virt_to_page can be called on the resulting address, and the resulting
-struct page can be used for everything a struct page is suitable for.
+These APIs allow allocating pages that are guaranteed to be DMA addressable
+by the passed-in device, but which need explicit management of memory
+ownership between the kernel and the device.
 
 If you don't understand how cache line coherency works between a processor and
 an I/O device, you should not be using this part of the API.
@@ -537,7 +536,7 @@ an I/O device, you should not be using this part of the API.
 This routine allocates a region of <size> bytes of consistent memory. It
 returns a pointer to the allocated region (in the processor's virtual address
 space) or NULL if the allocation failed. The returned memory may or may not
-be in the kernels direct mapping. Drivers must not call virt_to_page on
+be in the kernel direct mapping. Drivers must not call virt_to_page on
 the returned memory region.
 
 It also returns a <dma_handle> which may be cast to an unsigned integer the
@@ -565,7 +564,45 @@ reused.
 Free a region of memory previously allocated using dma_alloc_noncoherent().
 dev, size and dma_handle and dir must all be the same as those passed into
 dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
-the dma_alloc_noncoherent().
+dma_alloc_noncoherent().
+
+::
+
+	struct page *
+	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+			enum dma_data_direction dir, gfp_t gfp)
+
+This routine allocates a region of <size> bytes of non-coherent memory. It
+returns a pointer to the first struct page for the region, or NULL if the
+allocation failed. The resulting struct page can be used for everything a
+struct page is suitable for.
+
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
+
+The dir parameter specifies whether data is read and/or written by the
+device, see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_single_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
+reused.
+
+::
+
+	void
+	dma_free_pages(struct device *dev, size_t size, struct page *page,
+		       dma_addr_t dma_handle, enum dma_data_direction dir)
+
+Free a region of memory previously allocated using dma_alloc_pages().
+dev, size, dma_handle and dir must all be the same as those passed into
+dma_alloc_pages(). page must be the pointer returned by
+dma_alloc_pages().
 
 ::
 
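
The ownership rules documented above are easiest to see in code. Here is a
minimal sketch of a receive path built on dma_alloc_noncoherent(); the helper
name demo_noncoherent_rx() is hypothetical, dev is assumed to be a fully set
up DMA-capable device, and the programming of the device itself is elided::

  #include <linux/dma-mapping.h>

  /* demo_noncoherent_rx() is a hypothetical helper, not a kernel API. */
  static int demo_noncoherent_rx(struct device *dev, size_t size)
  {
          dma_addr_t dma_handle;
          void *buf;

          /* Allocate DMA addressable, non-coherent memory. */
          buf = dma_alloc_noncoherent(dev, size, &dma_handle,
                                      DMA_FROM_DEVICE, GFP_KERNEL);
          if (!buf)
                  return -ENOMEM;

          /* Transfer ownership of the region to the device. */
          dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);

          /* ... program dma_handle into the device, start the transfer
           * and wait for its completion ... */

          /* Transfer ownership back to the CPU before reading the data. */
          dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

          /* ... consume the data written by the device, via buf ... */

          dma_free_noncoherent(dev, size, buf, dma_handle, DMA_FROM_DEVICE);
          return 0;
  }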
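
A matching sketch for the struct page based variant added by this patch,
again with a hypothetical helper name and the device programming elided.
page_address() is safe here because dma_alloc_pages() rejects GFP_HIGHMEM,
so the returned pages are always in the kernel direct mapping::

  #include <linux/dma-mapping.h>
  #include <linux/mm.h>
  #include <linux/string.h>

  /* demo_pages_tx() is a hypothetical helper, not a kernel API. */
  static int demo_pages_tx(struct device *dev, size_t size)
  {
          dma_addr_t dma_handle;
          struct page *page;
          void *buf;

          page = dma_alloc_pages(dev, size, &dma_handle,
                                 DMA_TO_DEVICE, GFP_KERNEL);
          if (!page)
                  return -ENOMEM;

          /* Unlike for dma_alloc_coherent(), the struct page is fully
           * usable, so the buffer can be filled through its kernel
           * mapping. */
          buf = page_address(page);
          memset(buf, 0xff, size);

          /* Transfer ownership of the region to the device. */
          dma_sync_single_for_device(dev, dma_handle, size, DMA_TO_DEVICE);

          /* ... point the device at dma_handle, let it read the data,
           * and wait for the transfer to complete ... */

          dma_free_pages(dev, size, page, dma_handle, DMA_TO_DEVICE);
          return 0;
  }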