Discussion:
[PATCH 3/3] dmaengine: Fixup dmaengine_prep_slave_single() to be actually useful
Lars-Peter Clausen
2012-04-25 18:50:02 UTC
From: Kuninori Morimoto <***@renesas.com>

dmaengine_prep_slave_single() is a helper function which is supposed to be used
to prepare a transfer of a single contiguous buffer. Currently the function
takes a pointer to such a buffer, from which it builds a scatterlist and passes
it on to device_prep_slave_sg. The dmaengine framework requires that any
scatterlist passed to device_prep_slave_sg is mapped and not unmapped until the
DMA operation has completed. This is not the case here, so any use of
dmaengine_prep_slave_single() will lead to undefined behaviour (most likely a
system crash).

This patch changes dmaengine_prep_slave_single() to take a dma_addr_t instead of
a pointer to a buffer and moves the responsibility of mapping and unmapping the
buffer up to the caller.

Signed-off-by: Kuninori Morimoto <***@renesas.com>
Signed-off-by: Lars-Peter Clausen <***@metafoo.de>

---
Changes since v1:
* Drop call to sg_set_page
* Slightly reword commit message
---
include/linux/dmaengine.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 676f967..0e6b595 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -615,11 +615,13 @@ static inline int dmaengine_slave_config(struct dma_chan *chan,
}

static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
- struct dma_chan *chan, void *buf, size_t len,
+ struct dma_chan *chan, dma_addr_t buf, size_t len,
enum dma_transfer_direction dir, unsigned long flags)
{
struct scatterlist sg;
- sg_init_one(&sg, buf, len);
+ sg_init_table(&sg, 1);
+ sg_dma_address(&sg) = buf;
+ sg_dma_len(&sg) = len;

return chan->device->device_prep_slave_sg(chan, &sg, 1,
dir, flags, NULL);
--
1.7.9.5
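
For reference, a caller of the changed interface is now expected to map the
buffer itself, roughly as in the untested sketch below; the struct, field and
function names are made up purely for illustration:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Hypothetical driver state, just for the example. */
struct my_dev {
	struct device *dev;
	struct dma_chan *chan;
	void *buf;
	size_t len;
	dma_addr_t dma_buf;
};

static int my_start_tx(struct my_dev *p)
{
	struct dma_async_tx_descriptor *desc;

	/* The caller now owns the mapping... */
	p->dma_buf = dma_map_single(p->dev, p->buf, p->len, DMA_TO_DEVICE);
	if (dma_mapping_error(p->dev, p->dma_buf))
		return -ENOMEM;

	desc = dmaengine_prep_slave_single(p->chan, p->dma_buf, p->len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_unmap_single(p->dev, p->dma_buf, p->len, DMA_TO_DEVICE);
		return -EBUSY;
	}

	dmaengine_submit(desc);
	dma_async_issue_pending(p->chan);

	/* ... and must call dma_unmap_single() again once the transfer
	 * has completed (e.g. from the descriptor callback). */
	return 0;
}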

Lars-Peter Clausen
2012-04-25 18:50:02 UTC
sg->length may or may not contain the length of the DMA region to transfer,
depending on the architecture - sg_dma_len(sg) always will, though. For the
architectures on which the drivers modified by this patch are used it is
probably the case that sg->length contains the DMA transfer length, but to be
consistent and future-proof, change them to use sg_dma_len.

To quote Russel King:
sg->length is meaningless to something performing DMA.

In cases where sg_dma_len(sg) and sg->length are the same storage, then
there's no problem. But scatterlists _can_ be (and on some architectures are)
split - especially when you have an IOMMU which can allow you to
combine a scatterlist into fewer entries.

So, anything using sg->length for the size of a scatterlist's DMA transfer
_after_ a call to dma_map_sg() is almost certainly buggy.
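
To make the distinction concrete, code that walks a scatterlist after
dma_map_sg() should look roughly like the untested sketch below (whether the
mapping is done by the dmaengine client or, as here, locally);
my_hw_queue_xfer() is a made-up placeholder for the device-specific
descriptor setup:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical stand-in for programming one hardware descriptor. */
static void my_hw_queue_xfer(dma_addr_t addr, unsigned int len)
{
	/* device-specific register/descriptor writes would go here */
}

static int my_prep_sg(struct device *dev, struct scatterlist *sgl,
		      unsigned int sg_len, enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i, nents;

	/*
	 * After dma_map_sg() an IOMMU may have merged entries, so the
	 * number of DMA segments (nents) can be smaller than sg_len and
	 * sg->length no longer describes a DMA segment.
	 */
	nents = dma_map_sg(dev, sgl, sg_len, dir);
	if (nents <= 0)
		return -EINVAL;

	for_each_sg(sgl, sg, nents, i) {
		/* Always use the DMA accessors, never sg->length/sg_phys(). */
		my_hw_queue_xfer(sg_dma_address(sg), sg_dma_len(sg));
	}

	return 0;
}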

The patch has been generated using the following coccinelle patch:
<smpl>
@@
struct scatterlist *sg;
expression X;
@@
-sg[X].length
+sg_dma_len(&sg[X])
@@
struct scatterlist *sg;
@@
-sg->length
+sg_dma_len(sg)
</smpl>

Signed-off-by: Lars-Peter Clausen <***@metafoo.de>
---
Compile-tested only.

Btw, does anybody else think that assigning the dma length like
'sg_dma_len(sg) = len' is kind of odd? It's a macro, so it works, but it's not
very C-ish.
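
(For what it's worth, the assignment only works because the macro expands to
a plain struct member and is therefore an lvalue. The definitions in
include/linux/scatterlist.h are, approximately:

#define sg_dma_address(sg)	((sg)->dma_address)

#ifdef CONFIG_NEED_SG_DMA_LENGTH
#define sg_dma_len(sg)		((sg)->dma_length)
#else
#define sg_dma_len(sg)		((sg)->length)
#endif

quoted from memory, so treat the exact spelling as approximate.)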
---
drivers/dma/amba-pl08x.c | 2 +-
drivers/dma/coh901318.c | 2 +-
drivers/dma/imx-dma.c | 12 ++++++------
drivers/dma/imx-sdma.c | 2 +-
drivers/dma/intel_mid_dma.c | 6 +++---
drivers/dma/mxs-dma.c | 6 +++---
drivers/dma/ste_dma40.c | 2 +-
7 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/amba-pl08x.c b/drivers/dma/amba-pl08x.c
index 003220a..49ecbbb 100644
--- a/drivers/dma/amba-pl08x.c
+++ b/drivers/dma/amba-pl08x.c
@@ -1328,7 +1328,7 @@ static struct dma_async_tx_descriptor *pl08x_prep_slave_sg(
int ret, tmp;

dev_dbg(&pl08x->adev->dev, "%s prepare transaction of %d bytes from %s\n",
- __func__, sgl->length, plchan->name);
+ __func__, sg_dma_len(sgl), plchan->name);

txd = pl08x_get_txd(plchan, flags);
if (!txd) {
diff --git a/drivers/dma/coh901318.c b/drivers/dma/coh901318.c
index dc89455..c0b650c 100644
--- a/drivers/dma/coh901318.c
+++ b/drivers/dma/coh901318.c
@@ -1040,7 +1040,7 @@ coh901318_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,

if (!sgl)
goto out;
- if (sgl->length == 0)
+ if (sg_dma_len(sgl) == 0)
goto out;

spin_lock_irqsave(&cohc->lock, flg);
diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
index bb787d8..fcfeb3c 100644
--- a/drivers/dma/imx-dma.c
+++ b/drivers/dma/imx-dma.c
@@ -227,7 +227,7 @@ static inline int imxdma_sg_next(struct imxdma_desc *d)
struct scatterlist *sg = d->sg;
unsigned long now;

- now = min(d->len, sg->length);
+ now = min(d->len, sg_dma_len(sg));
if (d->len != IMX_DMA_LENGTH_LOOP)
d->len -= now;

@@ -763,16 +763,16 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg(
desc = list_first_entry(&imxdmac->ld_free, struct imxdma_desc, node);

for_each_sg(sgl, sg, sg_len, i) {
- dma_length += sg->length;
+ dma_length += sg_dma_len(sg);
}

switch (imxdmac->word_size) {
case DMA_SLAVE_BUSWIDTH_4_BYTES:
- if (sgl->length & 3 || sgl->dma_address & 3)
+ if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3)
return NULL;
break;
case DMA_SLAVE_BUSWIDTH_2_BYTES:
- if (sgl->length & 1 || sgl->dma_address & 1)
+ if (sg_dma_len(sgl) & 1 || sgl->dma_address & 1)
return NULL;
break;
case DMA_SLAVE_BUSWIDTH_1_BYTE:
@@ -831,13 +831,13 @@ static struct dma_async_tx_descriptor *imxdma_prep_dma_cyclic(
imxdmac->sg_list[i].page_link = 0;
imxdmac->sg_list[i].offset = 0;
imxdmac->sg_list[i].dma_address = dma_addr;
- imxdmac->sg_list[i].length = period_len;
+ sg_dma_len(&imxdmac->sg_list[i]) = period_len;
dma_addr += period_len;
}

/* close the loop */
imxdmac->sg_list[periods].offset = 0;
- imxdmac->sg_list[periods].length = 0;
+ sg_dma_len(&imxdmac->sg_list[periods]) = 0;
imxdmac->sg_list[periods].page_link =
((unsigned long)imxdmac->sg_list | 0x01) & ~0x02;

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index d3e38e2..a5452da 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -938,7 +938,7 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(

bd->buffer_addr = sg->dma_address;

- count = sg->length;
+ count = sg_dma_len(sg);

if (count > 0xffff) {
dev_err(sdma->dev, "SDMA channel %d: maximum bytes for sg entry exceeded: %d > %d\n",
diff --git a/drivers/dma/intel_mid_dma.c b/drivers/dma/intel_mid_dma.c
index d0ef593..222e907 100644
--- a/drivers/dma/intel_mid_dma.c
+++ b/drivers/dma/intel_mid_dma.c
@@ -394,7 +394,7 @@ static int midc_lli_fill_sg(struct intel_mid_dma_chan *midc,
}
}
/*Populate CTL_HI values*/
- ctl_hi.ctlx.block_ts = get_block_ts(sg->length,
+ ctl_hi.ctlx.block_ts = get_block_ts(sg_dma_len(sg),
desc->width,
midc->dma->block_size);
/*Populate SAR and DAR values*/
@@ -747,7 +747,7 @@ static struct dma_async_tx_descriptor *intel_mid_dma_prep_slave_sg(
txd = intel_mid_dma_prep_memcpy(chan,
mids->dma_slave.dst_addr,
mids->dma_slave.src_addr,
- sgl->length,
+ sg_dma_len(sgl),
flags);
return txd;
} else {
@@ -759,7 +759,7 @@ static struct dma_async_tx_descriptor *intel_mid_dma_prep_slave_sg(
pr_debug("MDMA: SG Length = %d, direction = %d, Flags = %#lx\n",
sg_len, direction, flags);

- txd = intel_mid_dma_prep_memcpy(chan, 0, 0, sgl->length, flags);
+ txd = intel_mid_dma_prep_memcpy(chan, 0, 0, sg_dma_len(sgl), flags);
if (NULL == txd) {
pr_err("MDMA: Prep memcpy failed\n");
return NULL;
diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
index 655d4ce..3db3a48 100644
--- a/drivers/dma/mxs-dma.c
+++ b/drivers/dma/mxs-dma.c
@@ -415,9 +415,9 @@ static struct dma_async_tx_descriptor *mxs_dma_prep_slave_sg(
ccw->bits |= BF_CCW(MXS_DMA_CMD_NO_XFER, COMMAND);
} else {
for_each_sg(sgl, sg, sg_len, i) {
- if (sg->length > MAX_XFER_BYTES) {
+ if (sg_dma_len(sg) > MAX_XFER_BYTES) {
dev_err(mxs_dma->dma_device.dev, "maximum bytes for sg entry exceeded: %d > %d\n",
- sg->length, MAX_XFER_BYTES);
+ sg_dma_len(sg), MAX_XFER_BYTES);
goto err_out;
}

@@ -425,7 +425,7 @@ static struct dma_async_tx_descriptor *mxs_dma_prep_slave_sg(

ccw->next = mxs_chan->ccw_phys + sizeof(*ccw) * idx;
ccw->bufaddr = sg->dma_address;
- ccw->xfer_bytes = sg->length;
+ ccw->xfer_bytes = sg_dma_len(sg);

ccw->bits = 0;
ccw->bits |= CCW_CHAIN;
diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
index 2ed1ac3..000d309 100644
--- a/drivers/dma/ste_dma40.c
+++ b/drivers/dma/ste_dma40.c
@@ -2362,7 +2362,7 @@ dma40_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
}

sg[periods].offset = 0;
- sg[periods].length = 0;
+ sg_dma_len(&sg[periods]) = 0;
sg[periods].page_link =
((unsigned long)sg | 0x01) & ~0x02;
--
1.7.9.5

Russell King - ARM Linux
2012-04-25 19:00:02 UTC
Post by Lars-Peter Clausen
dmaengine drivers should always use sg_dma_address instead of sg_phys to get
the addresses for the transfer from a sg element.
Great, but please spell my name with two l's in both of your patch
descriptions.
Linus Walleij
2012-04-26 21:30:02 UTC
Post by Lars-Peter Clausen
dmaengine drivers should always use sg_dma_address instead of sg_phys to get
the addresses for the transfer from a sg element.
All three patches look correct to me (cannot test right now though).
Acked-by: Linus Walleij <***@linaro.org>

Thanks,
Linus Walleij
Vinod Koul
2012-05-11 08:40:02 UTC
Post by Linus Walleij
dmaengine drivers should always use sg_dma_address instead of sg_phys to get the
addresses for the transfer from a sg element.
All three patches look correct to me (cannot test right now though).
Thanks, applied all 3.
--
~Vinod
