Given two data frames:
df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2, 4, 6), State = c(rep("Alabama", 2), rep("Ohio", 1)))
df1
# CustomerId Product
# 1 Toaster
# 2 Toaster
# 3 Toaster
# 4 Radio
# 5 Radio
# 6 Radio
df2
# CustomerId State
# 2 Alabama
# 4 Alabama
# 6 Ohio
How can I do database-style (i.e., SQL-style) joins? That is, how do I get:
- An inner join of df1 and df2.
- An outer join of df1 and df2.
- A left outer join of df1 and df2.
- A right outer join of df1 and df2: return all rows from the right table, and any rows with matching keys from the left table.
Extra credit:
How can I do a SQL-style select statement?
By using the merge function and its optional parameters:

Inner join: merge(df1, df2) will work for these examples because R automatically joins the frames by common variable names, but you will most likely want to specify merge(df1, df2, by = "CustomerId") to make sure that you are matching on only the fields you want. You can also use the by.x and by.y parameters if the matching variables have different names in the different data frames.
Outer join: merge(x = df1, y = df2, by = "CustomerId", all = TRUE)

Left outer: merge(x = df1, y = df2, by = "CustomerId", all.x = TRUE)

Right outer: merge(x = df1, y = df2, by = "CustomerId", all.y = TRUE)

Cross join: merge(x = df1, y = df2, by = NULL)

Just as with the inner join, you would probably want to explicitly pass "CustomerId" to R as the matching variable. I think it is almost always best to explicitly state the identifiers on which you want to merge; it is safer if the input data.frames change unexpectedly and easier to read later on.
You can merge on multiple columns by giving by a vector, e.g., by = c("CustomerId", "OrderId").

If the column names to merge on are not the same, you can specify, e.g., by.x = "CustomerId_in_df1", by.y = "CustomerId_in_df2", where CustomerId_in_df1 is the name of the column in the first data frame and CustomerId_in_df2 is the name of the column in the second data frame. (These can also be vectors if you need to merge on multiple columns.)
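For example, a minimal sketch with a hypothetical df2b, a copy of df2 whose key column has been given a different name:
df2b <- df2
names(df2b)[names(df2b) == "CustomerId"] <- "CustomerId_in_df2"
merge(df1, df2b, by.x = "CustomerId", by.y = "CustomerId_in_df2")  # same result as the inner join above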
I would recommend checking out Gabor Grothendieck's sqldf package, which allows you to express these operations in SQL.
library(sqldf)
## inner join
df3 <- sqldf("SELECT CustomerId, Product, State
FROM df1
JOIN df2 USING(CustomerID)")
## left join (substitute 'right' for right join)
df4 <- sqldf("SELECT CustomerId, Product, State
FROM df1
LEFT JOIN df2 USING(CustomerID)")
I find the SQL syntax to be simpler and more natural than its R equivalent (but this may just reflect my RDBMS bias).
See Gabor's sqldf GitHub for more information on joins.
There is also the data.table approach for an inner join, which is very time- and memory-efficient (and necessary for some larger data.frames):
library(data.table)
dt1 <- data.table(df1, key = "CustomerId")
dt2 <- data.table(df2, key = "CustomerId")
joined.dt1.dt.2 <- dt1[dt2]
merge also works on data.tables (since it is generic and calls merge.data.table):
merge(dt1, dt2)
data.table documented on stackoverflow:
- How to do a data.table merge operation
- Translating SQL joins on foreign keys to R data.table syntax
- Efficient alternatives to merge for larger data.frames in R
Yet another option is the join function found in the plyr package:
library(plyr)
join(df1, df2,
type = "inner")
# CustomerId Product State
# 1 2 Toaster Alabama
# 2 4 Radio Alabama
# 3 6 Radio Ohio
Options for type: inner, left, right, full.
From ?join: Unlike merge, [join] preserves the order of x no matter what join type is used.
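A small sketch of that difference, reusing the OP's data (df1_shuffled is just a reordered copy of df1 introduced for illustration):
library(plyr)
df1_shuffled <- df1[c(4, 2, 6, 1, 3, 5), ]
join(df1_shuffled, df2, by = "CustomerId", type = "left")   # rows stay in df1_shuffled's order
merge(df1_shuffled, df2, by = "CustomerId", all.x = TRUE)   # rows come back sorted by CustomerId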
You can also do joins using Hadley Wickham's awesome dplyr package:
library(dplyr)
#make sure that CustomerId cols are both type numeric
#they are not, given the code provided in the question, and dplyr will complain
df1$CustomerId <- as.numeric(df1$CustomerId)
df2$CustomerId <- as.numeric(df2$CustomerId)
#inner
inner_join(df1, df2)
#left outer
left_join(df1, df2)
#right outer
right_join(df1, df2)
#alternate right outer
left_join(df2, df1)
#full join
full_join(df1, df2)
semi_join(df1, df2) #keep only observations in df1 that match in df2.
anti_join(df1, df2) #drops all observations in df1 that match in df2.
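If you prefer to be explicit about the key (and avoid the message dplyr prints when it guesses the join columns), pass by directly, for example:
inner_join(df1, df2, by = "CustomerId")
left_join(df1, df2, by = "CustomerId")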
There are some good examples of doing this over at the R Wiki. I'll steal a couple here:

Merge method

Since your keys are named the same, the short way to do an inner join is merge():
merge(df1,df2)
A full outer join (all records from both tables) can be created with the "all" keyword:
merge(df1,df2, all=TRUE)
A left outer join of df1 and df2:
merge(df1,df2, all.x=TRUE)
A right outer join of df1 and df2:
merge(df1,df2, all.y=TRUE)
You can flip 'em, slap 'em and rub 'em down to get the other two outer joins you asked about :)
Subscript method

A left outer join with df1 on the left using a subscript method would be:
df1[, "State"] <- df2[match(df1$CustomerId, df2$CustomerId), "State"]
The other combinations of outer joins can be created by munging the left outer join subscript example, as sketched below. (Yeah, I know that's the equivalent of saying "I'll leave it as an exercise for the reader...")
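For instance, a sketch of the corresponding right outer join (all rows of df2, with Product pulled across from df1) just swaps the roles of the two frames:
df2[, "Product"] <- df1[match(df2$CustomerId, df1$CustomerId), "Product"]
df2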
New in 2014:

Especially if you're also interested in data manipulation in general (including sorting, filtering, subsetting, summarizing, etc.), you should definitely take a look at dplyr, which comes with a variety of functions all designed to facilitate your work specifically with data frames and certain other database types. It even offers quite an elaborate SQL interface, and even a function to convert (most) SQL code directly into R.

The four join-related functions in the dplyr package are (to quote):
- inner_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, and all columns from x and y
- left_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x, and all columns from x and y
- semi_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, keeping just columns from x
- anti_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are not matching values in y, keeping just columns from x

It's all described there in great detail.
Selecting columns can be done by select(df, "column"). If that's not SQL-ish enough for you, there is the sql() function, into which you can enter SQL code as-is; it will then do the operation you specified just as if you had been writing in R all along (for more information, please refer to the dplyr/databases vignette). For example, if applied correctly, sql("SELECT * FROM hflights") will select all the columns from the "hflights" dplyr table (a "tbl").
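As a small sketch of that SQL-style select idea on the OP's tables (the column choice here is just an example):
library(dplyr)
inner_join(df1, df2, by = "CustomerId") %>% select(CustomerId, State)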
An update on data.table methods for joining datasets. See below for examples of each type of join. There are two methods: one from [.data.table, where the second data.table is passed as the first argument to the subset operator; the other is to use the merge function, which dispatches to the fast data.table method.
df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2L, 4L, 7L), State = c(rep("Alabama", 2), rep("Ohio", 1))) # one value changed to show full outer join
library(data.table)
dt1 = as.data.table(df1)
dt2 = as.data.table(df2)
setkey(dt1, CustomerId)
setkey(dt2, CustomerId)
# right outer join keyed data.tables
dt1[dt2]
setkey(dt1, NULL)
setkey(dt2, NULL)
# right outer join unkeyed data.tables - use `on` argument
dt1[dt2, on = "CustomerId"]
# left outer join - swap dt1 with dt2
dt2[dt1, on = "CustomerId"]
# inner join - use `nomatch` argument
dt1[dt2, nomatch=NULL, on = "CustomerId"]
# anti join - use `!` operator
dt1[!dt2, on = "CustomerId"]
# inner join - using merge method
merge(dt1, dt2, by = "CustomerId")
# full outer join
merge(dt1, dt2, by = "CustomerId", all = TRUE)
# see ?merge.data.table arguments for other cases
Below are benchmarks of base R, sqldf, dplyr and data.table.
The benchmark tests unkeyed/unindexed datasets. It is performed on datasets of 50M-1 rows, with 50M-2 common values on the join column, so each scenario (inner, left, right, full) can be tested while the join is still non-trivial to perform. This is the type of join that stresses join algorithms well. Timings are as of sqldf 0.4.11, dplyr 0.7.8 and data.table 1.12.0.
# inner
Unit: seconds
expr min lq mean median uq max neval
base 111.66266 111.66266 111.66266 111.66266 111.66266 111.66266 1
sqldf 624.88388 624.88388 624.88388 624.88388 624.88388 624.88388 1
dplyr 51.91233 51.91233 51.91233 51.91233 51.91233 51.91233 1
DT 10.40552 10.40552 10.40552 10.40552 10.40552 10.40552 1
# left
Unit: seconds
expr min lq mean median uq max
base 142.782030 142.782030 142.782030 142.782030 142.782030 142.782030
sqldf 613.917109 613.917109 613.917109 613.917109 613.917109 613.917109
dplyr 49.711912 49.711912 49.711912 49.711912 49.711912 49.711912
DT 9.674348 9.674348 9.674348 9.674348 9.674348 9.674348
# right
Unit: seconds
expr min lq mean median uq max
base 122.366301 122.366301 122.366301 122.366301 122.366301 122.366301
sqldf 611.119157 611.119157 611.119157 611.119157 611.119157 611.119157
dplyr 50.384841 50.384841 50.384841 50.384841 50.384841 50.384841
DT 9.899145 9.899145 9.899145 9.899145 9.899145 9.899145
# full
Unit: seconds
expr min lq mean median uq max neval
base 141.79464 141.79464 141.79464 141.79464 141.79464 141.79464 1
dplyr 94.66436 94.66436 94.66436 94.66436 94.66436 94.66436 1
DT 21.62573 21.62573 21.62573 21.62573 21.62573 21.62573 1
Note that you can perform other types of joins using data.table (a short sketch of two of them follows this list):
- update on join - if you want to look up values from another table into your main table
- aggregate on join - if you want to aggregate on the key you are joining on, so you don't have to materialize all the join results
- overlapping join - if you want to merge by ranges
- rolling join - if you want your merge to be able to match values from preceding/following rows, by rolling them forward or backward
- non-equi join - if your join condition is non-equal
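A minimal sketch of two of those flavours, using hypothetical tables orders and limits (invented purely for illustration):
library(data.table)
orders <- data.table(CustomerId = c(1L, 2L, 2L, 3L), Amount = c(10, 20, 30, 40))
limits <- data.table(CustomerId = 1:3, MaxAmount = c(15, 25, 50))
# aggregate on join: sum Amount per row of limits without materializing the full join
orders[limits, on = "CustomerId", sum(Amount), by = .EACHI]
# non-equi join: orders whose Amount exceeds that customer's MaxAmount
orders[limits, on = .(CustomerId, Amount > MaxAmount), nomatch = NULL]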
Code to reproduce the benchmark:
library(microbenchmark)
library(sqldf)
library(dplyr)
library(data.table)
sapply(c("sqldf","dplyr","data.table"), packageVersion, simplify=FALSE)
n = 5e7
set.seed(108)
df1 = data.frame(x=sample(n,n-1L), y1=rnorm(n-1L))
df2 = data.frame(x=sample(n,n-1L), y2=rnorm(n-1L))
dt1 = as.data.table(df1)
dt2 = as.data.table(df2)
mb = list()
# inner join
microbenchmark(times = 1L,
base = merge(df1, df2, by = "x"),
sqldf = sqldf("SELECT * FROM df1 INNER JOIN df2 ON df1.x = df2.x"),
dplyr = inner_join(df1, df2, by = "x"),
DT = dt1[dt2, nomatch=NULL, on = "x"]) -> mb$inner
# left outer join
microbenchmark(times = 1L,
base = merge(df1, df2, by = "x", all.x = TRUE),
sqldf = sqldf("SELECT * FROM df1 LEFT OUTER JOIN df2 ON df1.x = df2.x"),
dplyr = left_join(df1, df2, by = c("x"="x")),
DT = dt2[dt1, on = "x"]) -> mb$left
# right outer join
microbenchmark(times = 1L,
base = merge(df1, df2, by = "x", all.y = TRUE),
sqldf = sqldf("SELECT * FROM df2 LEFT OUTER JOIN df1 ON df2.x = df1.x"),
dplyr = right_join(df1, df2, by = "x"),
DT = dt1[dt2, on = "x"]) -> mb$right
# full outer join
microbenchmark(times = 1L,
base = merge(df1, df2, by = "x", all = TRUE),
dplyr = full_join(df1, df2, by = "x"),
DT = merge(dt1, dt2, by = "x", all = TRUE)) -> mb$full
lapply(mb, print) -> nul
dplyr has implemented all of those joins since 0.4, including outer_join, but it is worth noting that for the first few releases prior to 0.4 it did not offer outer_join, and as a result a lot of really bad, hacky workaround user code was floating around for quite a while afterwards (you can still find such code in SO and Kaggle answers and on GitHub from that period; hence this answer still serves a purpose).

Join-related release highlights:

v0.1.3 (4/2014)

Workarounds per hadley's comments in that issue:
When joining two data frames, I was surprised to find that merge(..., all.x = TRUE, all.y = TRUE) is faster than dplyr::full_join(). This is with dplyr v0.4. merge takes ~17 seconds, full_join takes ~65 seconds.

Some food for thought, since I generally default to dplyr for manipulation tasks.
For the case of a left join with a 0..*:0..1 cardinality or a right join with a 0..1:0..* cardinality, it is possible to assign in place the unilateral columns from the joiner (the 0..1 table) directly onto the joinee (the 0..* table), and thereby avoid the creation of an entirely new table of data. This requires matching the key columns from the joinee into the joiner and indexing and ordering the joiner's rows accordingly for the assignment.

If the key is a single column, then we can use a single call to match() to do the matching. This is the case I'll cover in this answer.
Here's an example based on the OP, except that I've added an extra row to df2 with an id of 7 to test the case of a non-matching key in the joiner. This is effectively df1 left join df2:
df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L)));
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas'));
df1[names(df2)[-1L]] <- df2[match(df1[,1L],df2[,1L]),-1L];
df1;
## CustomerId Product State
## 1 1 Toaster <NA>
## 2 2 Toaster Alabama
## 3 3 Toaster <NA>
## 4 4 Radio Alabama
## 5 5 Radio <NA>
## 6 6 Radio Ohio
In the above I hard-coded the assumption that the key column is the first column of both input tables. I would argue that, in general, this is not an unreasonable assumption, since, if you have a data.frame with a key column, it would be strange if it had not been set up as the first column of the data.frame from the outset. And you can always reorder the columns to make it so. An advantageous consequence of this assumption is that the name of the key column does not have to be hard-coded, although I suppose it's just replacing one assumption with another. Concision is another advantage of integer indexing, as well as speed. In the benchmarks below I'll change the implementation to use string-name indexing to match the competing implementations.
I think this is a particularly appropriate solution if you have several tables that you want to left join against a single large table. Repeatedly rebuilding the entire table for each merge would be unnecessary and inefficient; a sketch of this pattern follows.
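A hypothetical sketch of that pattern, with df3 as an invented second lookup table that is also keyed by CustomerId:
df3 <- data.frame(CustomerId=c(1L,3L,5L),Segment=c('A','B','A'));
for (lk in list(df2,df3)) {
    cns <- setdiff(names(lk),'CustomerId');
    df1[cns] <- lk[match(df1$CustomerId,lk$CustomerId),cns];
}; ## end for
df1;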
On the other hand, if you need the joinee to remain unaltered through this operation for whatever reason, then this solution cannot be used, since it modifies the joinee directly. Although in that case you could simply make a copy and perform the in-place assignment(s) on the copy.
As a side note, I briefly looked into possible matching solutions for multicolumn keys. Unfortunately, the only matching solutions I found were:
- inefficient concatenations, e.g. match(interaction(df1$a, df1$b), interaction(df2$a, df2$b)), or the same idea with paste();
- inefficient cartesian conjunctions, e.g. outer(df1$a, df2$a, `==`) & outer(df1$b, df2$b, `==`);
- base R merge() and equivalent package-based merge functions, which always allocate a new table to return the merged result, and so are not suitable for an in-place assignment-based solution.

For example, see Matching multiple columns on different data frames and getting other column as result, match two columns with two other columns, Matching on multiple columns, and the dupe of this question where I originally came up with the in-place solution, Combine two data frames with different number of rows in R.
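For concreteness, a rough sketch of the interaction() idea for a hypothetical two-column key (a, b), invented purely for illustration:
dfA <- data.frame(a=c(1,1,2),b=c('x','y','x'),v1=1:3);
dfB <- data.frame(a=c(1,2),b=c('y','x'),v2=c(10,20));
dfA$v2 <- dfB$v2[match(interaction(dfA$a,dfA$b),interaction(dfB$a,dfB$b))];
dfA; ## left join of dfA with dfB on the compound key (a,b)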
I decided to do my own benchmarking to see how the in-place assignment approach compares to the other solutions offered in this question.

Testing code:
library(microbenchmark);
library(data.table);
library(sqldf);
library(plyr);
library(dplyr);
solSpecs <- list(
merge=list(testFuncs=list(
inner=function(df1,df2,key) merge(df1,df2,key),
left =function(df1,df2,key) merge(df1,df2,key,all.x=T),
right=function(df1,df2,key) merge(df1,df2,key,all.y=T),
full =function(df1,df2,key) merge(df1,df2,key,all=T)
)),
data.table.unkeyed=list(argSpec='data.table.unkeyed',testFuncs=list(
inner=function(dt1,dt2,key) dt1[dt2,on=key,nomatch=0L,allow.cartesian=T],
left =function(dt1,dt2,key) dt2[dt1,on=key,allow.cartesian=T],
right=function(dt1,dt2,key) dt1[dt2,on=key,allow.cartesian=T],
full =function(dt1,dt2,key) merge(dt1,dt2,key,all=T,allow.cartesian=T) ## calls merge.data.table()
)),
data.table.keyed=list(argSpec='data.table.keyed',testFuncs=list(
inner=function(dt1,dt2) dt1[dt2,nomatch=0L,allow.cartesian=T],
left =function(dt1,dt2) dt2[dt1,allow.cartesian=T],
right=function(dt1,dt2) dt1[dt2,allow.cartesian=T],
full =function(dt1,dt2) merge(dt1,dt2,all=T,allow.cartesian=T) ## calls merge.data.table()
)),
sqldf.unindexed=list(testFuncs=list( ## note: must pass connection=NULL to avoid running against the live DB connection, which would result in collisions with the residual tables from the last query upload
inner=function(df1,df2,key) sqldf(paste0('select * from df1 inner join df2 using(',paste(collapse=',',key),')'),connection=NULL),
left =function(df1,df2,key) sqldf(paste0('select * from df1 left join df2 using(',paste(collapse=',',key),')'),connection=NULL),
right=function(df1,df2,key) sqldf(paste0('select * from df2 left join df1 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do right join proper, not yet supported; inverted left join is equivalent
##full =function(df1,df2,key) sqldf(paste0('select * from df1 full join df2 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
)),
sqldf.indexed=list(testFuncs=list( ## important: requires an active DB connection with preindexed main.df1 and main.df2 ready to go; arguments are actually ignored
inner=function(df1,df2,key) sqldf(paste0('select * from main.df1 inner join main.df2 using(',paste(collapse=',',key),')')),
left =function(df1,df2,key) sqldf(paste0('select * from main.df1 left join main.df2 using(',paste(collapse=',',key),')')),
right=function(df1,df2,key) sqldf(paste0('select * from main.df2 left join main.df1 using(',paste(collapse=',',key),')')) ## can't do right join proper, not yet supported; inverted left join is equivalent
##full =function(df1,df2,key) sqldf(paste0('select * from main.df1 full join main.df2 using(',paste(collapse=',',key),')')) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
)),
plyr=list(testFuncs=list(
inner=function(df1,df2,key) join(df1,df2,key,'inner'),
left =function(df1,df2,key) join(df1,df2,key,'left'),
right=function(df1,df2,key) join(df1,df2,key,'right'),
full =function(df1,df2,key) join(df1,df2,key,'full')
)),
dplyr=list(testFuncs=list(
inner=function(df1,df2,key) inner_join(df1,df2,key),
left =function(df1,df2,key) left_join(df1,df2,key),
right=function(df1,df2,key) right_join(df1,df2,key),
full =function(df1,df2,key) full_join(df1,df2,key)
)),
in.place=list(testFuncs=list(
left =function(df1,df2,key) { cns <- setdiff(names(df2),key); df1[cns] <- df2[match(df1[,key],df2[,key]),cns]; df1; },
right=function(df1,df2,key) { cns <- setdiff(names(df1),key); df2[cns] <- df1[match(df2[,key],df1[,key]),cns]; df2; }
))
);
getSolTypes <- function() names(solSpecs);
getJoinTypes <- function() unique(unlist(lapply(solSpecs,function(x) names(x$testFuncs))));
getArgSpec <- function(argSpecs,key=NULL) if (is.null(key)) argSpecs$default else argSpecs[[key]];
initSqldf <- function() {
sqldf(); ## creates sqlite connection on first run, cleans up and closes existing connection otherwise
if (exists('sqldfInitFlag',envir=globalenv(),inherits=F) && sqldfInitFlag) { ## false only on first run
sqldf(); ## creates a new connection
} else {
assign('sqldfInitFlag',T,envir=globalenv()); ## set to true for the one and only time
}; ## end if
invisible();
}; ## end initSqldf()
setUpBenchmarkCall <- function(argSpecs,joinType,solTypes=getSolTypes(),env=parent.frame()) {
## builds and returns a list of expressions suitable for passing to the list argument of microbenchmark(), and assigns variables to resolve symbol references in those expressions
callExpressions <- list();
nms <- character();
for (solType in solTypes) {
testFunc <- solSpecs[[solType]]$testFuncs[[joinType]];
if (is.null(testFunc)) next; ## this join type is not defined for this solution type
testFuncName <- paste0('tf.',solType);
assign(testFuncName,testFunc,envir=env);
argSpecKey <- solSpecs[[solType]]$argSpec;
argSpec <- getArgSpec(argSpecs,argSpecKey);
argList <- setNames(nm=names(argSpec$args),vector('list',length(argSpec$args)));
for (i in seq_along(argSpec$args)) {
argName <- paste0('tfa.',argSpecKey,i);
assign(argName,argSpec$args[[i]],envir=env);
argList[[i]] <- if (i%in%argSpec$copySpec) call('copy',as.symbol(argName)) else as.symbol(argName);
}; ## end for
callExpressions[[length(callExpressions)+1L]] <- do.call(call,c(list(testFuncName),argList),quote=T);
nms[length(nms)+1L] <- solType;
}; ## end for
names(callExpressions) <- nms;
callExpressions;
}; ## end setUpBenchmarkCall()
harmonize <- function(res) {
res <- as.data.frame(res); ## coerce to data.frame
for (ci in which(sapply(res,is.factor))) res[[ci]] <- as.character(res[[ci]]); ## coerce factor columns to character
for (ci in which(sapply(res,is.logical))) res[[ci]] <- as.integer(res[[ci]]); ## coerce logical columns to integer (works around sqldf quirk of munging logicals to integers)
##for (ci in which(sapply(res,inherits,'POSIXct'))) res[[ci]] <- as.double(res[[ci]]); ## coerce POSIXct columns to double (works around sqldf quirk of losing POSIXct class) ----- POSIXct doesn't work at all in sqldf.indexed
res <- res[order(names(res))]; ## order columns
res <- res[do.call(order,res),]; ## order rows
res;
}; ## end harmonize()
checkIdentical <- function(argSpecs,solTypes=getSolTypes()) {
for (joinType in getJoinTypes()) {
callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
if (length(callExpressions)<2L) next;
ex <- harmonize(eval(callExpressions[[1L]]));
for (i in seq(2L,len=length(callExpressions)-1L)) {
y <- harmonize(eval(callExpressions[[i]]));
if (!isTRUE(all.equal(ex,y,check.attributes=F))) {
ex <<- ex;
y <<- y;
solType <- names(callExpressions)[i];
stop(paste0('non-identical: ',solType,' ',joinType,'.'));
}; ## end if
}; ## end for
}; ## end for
invisible();
}; ## end checkIdentical()
testJoinType <- function(argSpecs,joinType,solTypes=getSolTypes(),metric=NULL,times=100L) {
callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
bm <- microbenchmark(list=callExpressions,times=times);
if (is.null(metric)) return(bm);
bm <- summary(bm);
res <- setNames(nm=names(callExpressions),bm[[metric]]);
attr(res,'unit') <- attr(bm,'unit');
res;
}; ## end testJoinType()
testAllJoinTypes <- function(argSpecs,solTypes=getSolTypes(),metric=NULL,times=100L) {
joinTypes <- getJoinTypes();
resList <- setNames(nm=joinTypes,lapply(joinTypes,function(joinType) testJoinType(argSpecs,joinType,solTypes,metric,times)));
if (is.null(metric)) return(resList);
units <- unname(unlist(lapply(resList,attr,'unit')));
res <- do.call(data.frame,c(list(join=joinTypes),setNames(nm=solTypes,rep(list(rep(NA_real_,length(joinTypes))),length(solTypes))),list(unit=units,stringsAsFactors=F)));
for (i in seq_along(resList)) res[i,match(names(resList[[i]]),names(res))] <- resList[[i]];
res;
}; ## end testAllJoinTypes()
testGrid <- function(makeArgSpecsFunc,sizes,overlaps,solTypes=getSolTypes(),joinTypes=getJoinTypes(),metric='median',times=100L) {
res <- expand.grid(size=sizes,overlap=overlaps,joinType=joinTypes,stringsAsFactors=F);
res[solTypes] <- NA_real_;
res$unit <- NA_character_;
for (ri in seq_len(nrow(res))) {
size <- res$size[ri];
overlap <- res$overlap[ri];
joinType <- res$joinType[ri];
argSpecs <- makeArgSpecsFunc(size,overlap);
checkIdentical(argSpecs,solTypes);
cur <- testJoinType(argSpecs,joinType,solTypes,metric,times);
res[ri,match(names(cur),names(res))] <- cur;
res$unit[ri] <- attr(cur,'unit');
}; ## end for
res;
}; ## end testGrid()
Here's a benchmark of the example based on the OP that I demonstrated earlier:
## OP's example, supplemented with a non-matching row in df2
argSpecs <- list(
default=list(copySpec=1:2,args=list(
df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L))),
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas')),
'CustomerId'
)),
data.table.unkeyed=list(copySpec=1:2,args=list(
as.data.table(df1),
as.data.table(df2),
'CustomerId'
)),
data.table.keyed=list(copySpec=1:2,args=list(
setkey(as.data.table(df1),CustomerId),
setkey(as.data.table(df2),CustomerId)
))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(CustomerId);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(CustomerId);'); ## upload and create an sqlite index on df2
checkIdentical(argSpecs);
testAllJoinTypes(argSpecs,metric='median');
## join merge data.table.unkeyed data.table.keyed sqldf.unindexed sqldf.indexed plyr dplyr in.place unit
## 1 inner 644.259 861.9345 923.516 9157.752 1580.390 959.2250 270.9190 NA microseconds
## 2 left 713.539 888.0205 910.045 8820.334 1529.714 968.4195 270.9185 224.3045 microseconds
## 3 right 1221.804 909.1900 923.944 8930.668 1533.135 1063.7860 269.8495 218.1035 microseconds
## 4 full 1302.203 3107.5380 3184.729 NA NA 1593.6475 270.7055 NA microseconds
Here I benchmark on random input data, trying different scales and different patterns of key overlap between the two input tables. This benchmark is still restricted to the case of a single-column integer key. Also, to ensure that the in-place solution would work for both left and right joins of the same tables, all random test data uses 0..1:0..1 cardinality. This is implemented by sampling without replacement the key column of the first data.frame when generating the key column of the second data.frame.
makeArgSpecs.singleIntegerKey.optionalOneToOne <- function(size,overlap) {
com <- as.integer(size*overlap);
argSpecs <- list(
default=list(copySpec=1:2,args=list(
df1 <- data.frame(id=sample(size),y1=rnorm(size),y2=rnorm(size)),
df2 <- data.frame(id=sample(c(if (com>0L) sample(df1$id,com) else integer(),seq(size+1L,len=size-com))),y3=rnorm(size),y4=rnorm(size)),
'id'
)),
data.table.unkeyed=list(copySpec=1:2,args=list(
as.data.table(df1),
as.data.table(df2),
'id'
)),
data.table.keyed=list(copySpec=1:2,args=list(
setkey(as.data.table(df1),id),
setkey(as.data.table(df2),id)
))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(id);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(id);'); ## upload and create an sqlite index on df2
argSpecs;
}; ## end makeArgSpecs.singleIntegerKey.optionalOneToOne()
## cross of various input sizes and key overlaps
sizes <- c(1e1L,1e3L,1e6L);
overlaps <- c(0.99,0.5,0.01);
system.time({ res <- testGrid(makeArgSpecs.singleIntegerKey.optionalOneToOne,sizes,overlaps); });
## user system elapsed
## 22024.65 12308.63 34493.19
I wrote some code to create log-log plots of the above results. I generated a separate plot for each overlap percentage. It's a little bit cluttered, but I like having all the solution types and join types represented in the same plot.

I used spline interpolation to show a smooth curve for each solution/join type combination, drawn with individual pch symbols. The join type is captured by the pch symbol, using a dot for inner, left and right angle brackets for left and right, and a diamond for full. The solution type is captured by the color, as shown in the legend.
plotRes <- function(res,titleFunc,useFloor=F) {
solTypes <- setdiff(names(res),c('size','overlap','joinType','unit')); ## derive from res
normMult <- c(microseconds=1e-3,milliseconds=1); ## normalize to milliseconds
joinTypes <- getJoinTypes();
cols <- c(merge='purple',data.table.unkeyed='blue',data.table.keyed='#00DDDD',sqldf.unindexed='brown',sqldf.indexed='orange',plyr='red',dplyr='#00BB00',in.place='magenta');
pchs <- list(inner=20L,left='<',right='>',full=23L);
cexs <- c(inner=0.7,left=1,right=1,full=0.7);
NP <- 60L;
ord <- order(decreasing=T,colMeans(res[res$size==max(res$size),solTypes],na.rm=T));
ymajors <- data.frame(y=c(1,1e3),label=c('1ms','1s'),stringsAsFactors=F);
for (overlap in unique(res$overlap)) {
x1 <- res[res$overlap==overlap,];
x1[solTypes] <- x1[solTypes]*normMult[x1$unit]; x1$unit <- NULL;
xlim <- c(1e1,max(x1$size));
xticks <- 10^seq(log10(xlim[1L]),log10(xlim[2L]));
ylim <- c(1e-1,10^((if (useFloor) floor else ceiling)(log10(max(x1[solTypes],na.rm=T))))); ## use floor() to zoom in a little more, only sqldf.unindexed will break above, but xpd=NA will keep it visible
yticks <- 10^seq(log10(ylim[1L]),log10(ylim[2L]));
yticks.minor <- rep(yticks[-length(yticks)],each=9L)*1:9;
plot(NA,xlim=xlim,ylim=ylim,xaxs='i',yaxs='i',axes=F,xlab='size (rows)',ylab='time (ms)',log='xy');
abline(v=xticks,col='lightgrey');
abline(h=yticks.minor,col='lightgrey',lty=3L);
abline(h=yticks,col='lightgrey');
axis(1L,xticks,parse(text=sprintf('10^%d',as.integer(log10(xticks)))));
axis(2L,yticks,parse(text=sprintf('10^%d',as.integer(log10(yticks)))),las=1L);
axis(4L,ymajors$y,ymajors$label,las=1L,tick=F,cex.axis=0.7,hadj=0.5);
for (joinType in rev(joinTypes)) { ## reverse to draw full first, since it's larger and would be more obtrusive if drawn last
x2 <- x1[x1$joinType==joinType,];
for (solType in solTypes) {
if (any(!is.na(x2[[solType]]))) {
xy <- spline(x2$size,x2[[solType]],xout=10^(seq(log10(x2$size[1L]),log10(x2$size[nrow(x2)]),len=NP)));
points(xy$x,xy$y,pch=pchs[[joinType]],col=cols[solType],cex=cexs[joinType],xpd=NA);
}; ## end if
}; ## end for
}; ## end for
## custom legend
## due to logarithmic skew, must do all distance calcs in inches, and convert to user coords afterward
## the bottom-left corner of the legend will be defined in normalized figure coords, although we can convert to inches immediately
leg.cex <- 0.7;
leg.x.in <- grconvertX(0.275,'nfc','in');
leg.y.in <- grconvertY(0.6,'nfc','in');
leg.x.user <- grconvertX(leg.x.in,'in');
leg.y.user <- grconvertY(leg.y.in,'in');
leg.outpad.w.in <- 0.1;
leg.outpad.h.in <- 0.1;
leg.midpad.w.in <- 0.1;
leg.midpad.h.in <- 0.1;
leg.sol.w.in <- max(strwidth(solTypes,'in',leg.cex));
leg.sol.h.in <- max(strheight(solTypes,'in',leg.cex))*1.5; ## multiplication factor for greater line height
leg.join.w.in <- max(strheight(joinTypes,'in',leg.cex))*1.5; ## ditto
leg.join.h.in <- max(strwidth(joinTypes,'in',leg.cex));
leg.main.w.in <- leg.join.w.in*length(joinTypes);
leg.main.h.in <- leg.sol.h.in*length(solTypes);
leg.x2.user <- grconvertX(leg.x.in+leg.outpad.w.in*2+leg.main.w.in+leg.midpad.w.in+leg.sol.w.in,'in');
leg.y2.user <- grconvertY(leg.y.in+leg.outpad.h.in*2+leg.main.h.in+leg.midpad.h.in+leg.join.h.in,'in');
leg.cols.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.join.w.in*(0.5+seq(0L,length(joinTypes)-1L)),'in');
leg.lines.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in-leg.sol.h.in*(0.5+seq(0L,length(solTypes)-1L)),'in');
leg.sol.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.main.w.in+leg.midpad.w.in,'in');
leg.join.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in+leg.midpad.h.in,'in');
rect(leg.x.user,leg.y.user,leg.x2.user,leg.y2.user,col='white');
text(leg.sol.x.user,leg.lines.y.user,solTypes[ord],cex=leg.cex,pos=4L,offset=0);
text(leg.cols.x.user,leg.join.y.user,joinTypes,cex=leg.cex,pos=4L,offset=0,srt=90); ## srt rotation applies *after* pos/offset positioning
for (i in seq_along(joinTypes)) {
joinType <- joinTypes[i];
points(rep(leg.cols.x.user[i],length(solTypes)),ifelse(colSums(!is.na(x1[x1$joinType==joinType,solTypes[ord]]))==0L,NA,leg.lines.y.user),pch=pchs[[joinType]],col=cols[solTypes[ord]]);
}; ## end for
title(titleFunc(overlap));
readline(sprintf('overlap %.02f',overlap));
}; ## end for
}; ## end plotRes()
titleFunc <- function(overlap) sprintf('R merge solutions: single-column integer key, 0..1:0..1 cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,T);
Here's a second, larger-scale benchmark that's more heavy-duty with respect to the number and types of key columns, as well as cardinality. For this benchmark I use three key columns: one character, one integer, and one logical, with no restrictions on cardinality (that is, 0..*:0..*). (In general it's not advisable to define key columns with double or complex values due to floating-point comparison complications, and basically no one ever uses the raw type, much less for key columns, so I haven't included those types in the keys. Also, for information's sake, I initially tried to use four key columns by including a POSIXct key column, but the POSIXct type didn't play well with the sqldf.indexed solution for some reason, possibly due to floating-point comparison anomalies, so I removed it.)
makeArgSpecs.assortedKey.optionalManyToMany <- function(size,overlap,uniquePct=75) {
## number of unique keys in df1
u1Size <- as.integer(size*uniquePct/100);
## (roughly) divide u1Size into bases, so we can use expand.grid() to produce the required number of unique key values with repetitions within individual key columns
## use ceiling() to ensure we cover u1Size; will truncate afterward
u1SizePerKeyColumn <- as.integer(ceiling(u1Size^(1/3)));
## generate the unique key values for df1
keys1 <- expand.grid(stringsAsFactors=F,
idCharacter=replicate(u1SizePerKeyColumn,paste(collapse='',sample(letters,sample(4:12,1L),T))),
idInteger=sample(u1SizePerKeyColumn),
idLogical=sample(c(F,T),u1SizePerKeyColumn,T)
##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+sample(u1SizePerKeyColumn)
)[seq_len(u1Size),];
## rbind some repetitions of the unique keys; this will prepare one side of the many-to-many relationship
## also scramble the order afterward
keys1 <- rbind(keys1,keys1[sample(nrow(keys1),size-u1Size,T),])[sample(size),];
## common and unilateral key counts
com <- as.integer(size*overlap);
uni <- size-com;
## generate some unilateral keys for df2 by synthesizing outside of the idInteger range of df1
keys2 <- data.frame(stringsAsFactors=F,
idCharacter=replicate(uni,paste(collapse='',sample(letters,sample(4:12,1L),T))),
idInteger=u1SizePerKeyColumn+sample(uni),
idLogical=sample(c(F,T),uni,T)
##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+u1SizePerKeyColumn+sample(uni)
);
## rbind random keys from df1; this will complete the many-to-many relationship
## also scramble the order afterward
keys2 <- rbind(keys2,keys1[sample(nrow(keys1),com,T),])[sample(size),];
##keyNames <- c('idCharacter','idInteger','idLogical','idPOSIXct');
keyNames <- c('idCharacter','idInteger','idLogical');
## note: was going to use raw and complex type for two of the non-key columns, but data.table doesn't seem to fully support them
argSpecs <- list(
default=list(copySpec=1:2,args=list(
df1 <- cbind(stringsAsFactors=F,keys1,y1=sample(c(F,T),size,T),y2=sample(size),y3=rnorm(size),y4=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
df2 <- cbind(stringsAsFactors=F,keys2,y5=sample(c(F,T),size,T),y6=sample(size),y7=rnorm(size),y8=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
keyNames
)),
data.table.unkeyed=list(copySpec=1:2,args=list(
as.data.table(df1),
as.data.table(df2),
keyNames
)),
data.table.keyed=list(copySpec=1:2,args=list(
setkeyv(as.data.table(df1),keyNames),
setkeyv(as.data.table(df2),keyNames)
))
);
## prepare sqldf
initSqldf();
sqldf(paste0('create index df1_key on df1(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df1
sqldf(paste0('create index df2_key on df2(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df2
argSpecs;
}; ## end makeArgSpecs.assortedKey.optionalManyToMany()
sizes <- c(1e1L,1e3L,1e5L); ## 1e5L instead of 1e6L to respect more heavy-duty inputs
overlaps <- c(0.99,0.5,0.01);
solTypes <- setdiff(getSolTypes(),'in.place');
system.time({ res <- testGrid(makeArgSpecs.assortedKey.optionalManyToMany,sizes,overlaps,solTypes); });
## user system elapsed
## 38895.50 784.19 39745.53
Plots produced using the same plotting code as above:
titleFunc <- function(overlap) sprintf('R merge solutions: character/integer/logical key, 0..*:0..* cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,F);
Using the merge function we can select the variables of the left table or the right table, in the same way we are all familiar with the select statement in SQL (e.g.: select a.* ... or select b.* from ...). We just have to add extra code that subsets from the newly joined table.

SQL: select a.* from df1 a inner join df2 b on a.CustomerId = b.CustomerId

R: merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df1)]

In the same way:

SQL: select b.* from df1 a inner join df2 b on a.CustomerId = b.CustomerId

R: merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df2)]
For an inner join on all columns, you could also use fintersect from the data.table package or intersect from the dplyr package as an alternative to merge without specifying the by columns. This gives the rows that are equal between the two data frames:
merge(df1, df2)
# V1 V2
# 1 B 2
# 2 C 3
dplyr::intersect(df1, df2)
# V1 V2
# 1 B 2
# 2 C 3
data.table::fintersect(setDT(df1), setDT(df2))
# V1 V2
# 1: B 2
# 2: C 3
Sample data:
df1 <- data.frame(V1 = LETTERS[1:4], V2 = 1:4)
df2 <- data.frame(V1 = LETTERS[2:3], V2 = 2:3)
Update join. One other important SQL-style join is an "update join", where columns in one table are updated (or created) using another table.

Modifying the OP's example tables...
sales = data.frame(
CustomerId = c(1, 1, 1, 3, 4, 6),
Year = 2000:2005,
Product = c(rep("Toaster", 3), rep("Radio", 3))
)
cust = data.frame(
CustomerId = c(1, 1, 4, 6),
Year = c(2001L, 2002L, 2002L, 2002L),
State = state.name[1:4]
)
sales
# CustomerId Year Product
# 1 2000 Toaster
# 1 2001 Toaster
# 1 2002 Toaster
# 3 2003 Radio
# 4 2004 Radio
# 6 2005 Radio
cust
# CustomerId Year State
# 1 2001 Alabama
# 1 2002 Alaska
# 4 2002 Arizona
# 6 2002 Arkansas
Suppose we want to add the customer's state from cust to the table of purchases, sales, ignoring the year column. With base R, we can identify matching rows and then copy values over:
sales$State <- cust$State[ match(sales$CustomerId, cust$CustomerId) ]
# CustomerId Year Product State
# 1 2000 Toaster Alabama
# 1 2001 Toaster Alabama
# 1 2002 Toaster Alabama
# 3 2003 Radio <NA>
# 4 2004 Radio Arizona
# 6 2005 Radio Arkansas
# cleanup for the next example
sales$State <- NULL
As can be seen here, match selects the first matching row from the customer table.

Update join with multiple columns. The approach above works well when we are joining on only a single column and are satisfied with the first match. Suppose we want the year of measurement in the customer table to match the year of sale.
As @bgoldst's answer mentions, match with interaction might be an option for this case. More straightforwardly, one could use data.table:
library(data.table)
setDT(sales); setDT(cust)
sales[, State := cust[sales, on=.(CustomerId, Year), x.State]]
# CustomerId Year Product State
# 1: 1 2000 Toaster <NA>
# 2: 1 2001 Toaster Alabama
# 3: 1 2002 Toaster Alaska
# 4: 3 2003 Radio <NA>
# 5: 4 2004 Radio <NA>
# 6: 6 2005 Radio <NA>
# cleanup for next example
sales[, State := NULL]
Rolling update join. Alternatively, we may want to take the last state the customer was found in:
sales[, State := cust[sales, on=.(CustomerId, Year), roll=TRUE, x.State]]
# CustomerId Year Product State
# 1: 1 2000 Toaster <NA>
# 2: 1 2001 Toaster Alabama
# 3: 1 2002 Toaster Alaska
# 4: 3 2003 Radio <NA>
# 5: 4 2004 Radio Arizona
# 6: 6 2005 Radio Arkansas
The three examples above all focus on creating/adding a new column. See the related R FAQ for an example of updating/modifying an existing column.
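As a minimal sketch (not the FAQ's example) of modifying an existing column in place, assuming sales still holds the State column created in the rolling example above, a data.table update join overwrites it only where cust has an exact CustomerId/Year match and leaves the other rows untouched:
sales[cust, on = .(CustomerId, Year), State := i.State]
sales[]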