Extract Hive table partition in Spark - Java

Is there any way in Spark to extract only the partition column names?
The workaround I am using is to run "show extended table like table_name" using HiveContext.
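For reference, a minimal sketch of that workaround (hedged: it assumes a Spark 1.x HiveContext built from an existing SparkContext, and my_table is a placeholder name):

import org.apache.spark.sql.Row;
import org.apache.spark.sql.hive.HiveContext;

// "show extended table like <name>" returns rows of text; the detailed
// table information (including the partition columns) has to be parsed out.
HiveContext hiveContext = new HiveContext(sparkContext); // sparkContext: existing SparkContext
Row[] rows = hiveContext.sql("show extended table like my_table").collect();
for (Row row : rows) {
    System.out.println(row.getString(0));
}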

apache-spark hive
asked Oct 1 '16 at 13:18 – user2895589
edited Nov 22 '18 at 10:26 – mrsrinivas

  • HiveMetaStoreClient should be useful for you. – Ram Ghadiyaram, Oct 1 '16 at 15:07

1 Answer

You can use the class HiveMetaStoreClient to query the HiveMetaStore directly.



This class is also widely used by popular tools that interact with the HiveMetaStore, for example Apache Drill.




org.apache.hadoop.hive.metastore.api.Partition getPartition(String db_name, String tbl_name, List<String> part_vals)

org.apache.hadoop.hive.metastore.api.Partition getPartition(String db, String tableName, String partName)

Map<String, List<ColumnStatisticsObj>> getPartitionColumnStatistics(String dbName, String tableName, List<String> partNames, List<String> colNames)

Get partition column statistics given dbName, tableName, multiple partition names, and column names.

List<Partition> getPartitionsByNames(String db_name, String tbl_name, List<String> part_names)

Get partitions by a list of partition names.

There are list methods as well:

List<String> listPartitionNames(String db_name, String tbl_name, List<String> part_vals, short max_parts)

List<String> listPartitionNames(String dbName, String tblName, short max)

List<Partition> listPartitions(String db_name, String tbl_name, List<String> part_vals, short max_parts)

List<Partition> listPartitions(String db_name, String tbl_name, short max_parts)
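Note that the question asks for the partition column names rather than the partition values; those are the table's partition keys. A minimal sketch, assuming an already-connected HiveMetaStoreClient and placeholder names db1/table1:

import java.util.List;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.Table;

// Partition columns are the table's partition keys, kept on the Table
// object itself (Partition objects only carry the values).
Table table = hiveMetaStoreClient.getTable("db1", "table1"); // placeholder names
for (FieldSchema partitionKey : table.getPartitionKeys()) {
    System.out.println(partitionKey.getName() + " : " + partitionKey.getType());
}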




Sample code snippet 1:



import org.apache.hadoop.hive.conf.HiveConf;

// test program (assumes HiveMetaStoreConnector below is in the same package)
public class Test {
    public static void main(String[] args) {

        HiveConf hiveConf = new HiveConf();
        hiveConf.setIntVar(HiveConf.ConfVars.METASTORETHRIFTCONNECTIONRETRIES, 3);
        hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://host:port");

        HiveMetaStoreConnector hiveMetaStoreConnector = new HiveMetaStoreConnector(hiveConf);
        // getAllPartitionInfo takes a database name and walks all of its tables
        System.out.print(hiveMetaStoreConnector.getAllPartitionInfo("dbname"));
    }
}


// define a class like this

import com.google.common.base.Joiner;
import com.google.common.collect.Lists;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.thrift.TException;
import org.joda.time.DateTime;

import java.util.Arrays;
import java.util.List;

public class HiveMetaStoreConnector {
    private HiveConf hiveConf;
    HiveMetaStoreClient hiveMetaStoreClient;

    public HiveMetaStoreConnector(String msAddr, String msPort) {
        try {
            hiveConf = new HiveConf();
            // msAddr is expected to include the thrift:// scheme
            hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, msAddr + ":" + msPort);
            hiveMetaStoreClient = new HiveMetaStoreClient(hiveConf);
        } catch (MetaException e) {
            e.printStackTrace();
            System.err.println("Constructor error");
            System.err.println(e.toString());
            System.exit(-100);
        }
    }

    public HiveMetaStoreConnector(HiveConf hiveConf) {
        try {
            this.hiveConf = hiveConf;
            hiveMetaStoreClient = new HiveMetaStoreClient(hiveConf);
        } catch (MetaException e) {
            e.printStackTrace();
            System.err.println("Constructor error");
            System.err.println(e.toString());
            System.exit(-100);
        }
    }

    // one line per partition of every table in the given database
    public String getAllPartitionInfo(String dbName) {
        List<String> res = Lists.newArrayList();
        try {
            List<String> tableList = hiveMetaStoreClient.getAllTables(dbName);
            for (String tableName : tableList) {
                res.addAll(getTablePartitionInformation(dbName, tableName));
            }
        } catch (MetaException e) {
            e.printStackTrace();
            System.out.println("getAllPartitionInfo error");
            System.out.println(e.toString());
            System.exit(-100);
        }

        return Joiner.on("\n").join(res);
    }

    public List<String> getTablePartitionInformation(String dbName, String tableName) {
        List<String> partitionsInfo = Lists.newArrayList();
        try {
            List<Partition> partitions = hiveMetaStoreClient.listPartitions(dbName, tableName, (short) 10000);
            for (Partition partition : partitions) {
                StringBuffer sb = new StringBuffer();
                sb.append(tableName);
                sb.append("\t");
                // pad to four partition columns so the output stays tabular
                List<String> partitionValues = partition.getValues();
                if (partitionValues.size() < 4) {
                    int size = partitionValues.size();
                    for (int j = 0; j < 4 - size; j++) {
                        partitionValues.add("null");
                    }
                }
                sb.append(Joiner.on("\t").join(partitionValues));
                sb.append("\t");
                // create time is seconds since the epoch
                DateTime createDate = new DateTime((long) partition.getCreateTime() * 1000);
                sb.append(createDate.toString("yyyy-MM-dd HH:mm:ss"));
                partitionsInfo.add(sb.toString());
            }

        } catch (TException e) {
            e.printStackTrace();
            return Arrays.asList("error for request on " + tableName);
        }

        return partitionsInfo;
    }

    public String getAllTableStatistic(String dbName) {
        List<String> res = Lists.newArrayList();
        try {
            List<String> tableList = hiveMetaStoreClient.getAllTables(dbName);
            for (String tableName : tableList) {
                res.addAll(getTableColumnsInformation(dbName, tableName));
            }
        } catch (MetaException e) {
            e.printStackTrace();
            System.out.println("getAllTableStatistic error");
            System.out.println(e.toString());
            System.exit(-100);
        }

        return Joiner.on("\n").join(res);
    }

    public List<String> getTableColumnsInformation(String dbName, String tableName) {
        try {
            List<FieldSchema> fields = hiveMetaStoreClient.getFields(dbName, tableName);
            List<String> infs = Lists.newArrayList();
            int cnt = 0;
            for (FieldSchema fs : fields) {
                StringBuffer sb = new StringBuffer();
                sb.append(tableName);
                sb.append("\t");
                sb.append(cnt);
                sb.append("\t");
                cnt++;
                sb.append(fs.getName());
                sb.append("\t");
                sb.append(fs.getType());
                sb.append("\t");
                sb.append(fs.getComment());
                infs.add(sb.toString());
            }

            return infs;

        } catch (TException e) {
            e.printStackTrace();
            System.out.println("getTableColumnsInformation error");
            System.out.println(e.toString());
            System.exit(-100);
            return null;
        }
    }
}
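As a rough guide (an assumption, not stated in the source): compiling these snippets needs the hive-metastore jar and its Thrift dependencies on the classpath, plus Guava (Joiner, Lists) and Joda-Time, and the URI set in METASTOREURIS must point at a running Thrift metastore service.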


Sample code snippet 2 (source):



import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Database;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
import org.apache.hive.hcatalog.common.HCatUtil;
import org.apache.thrift.TException;

import java.io.IOException;
import java.util.HashMap;

public class HiveMetaStoreClientTest {
    public static void main(String[] args) {

        HiveConf hiveConf = null;
        HiveMetaStoreClient hiveMetaStoreClient = null;
        String dbName = null;

        try {
            // builds the configuration from hive-site.xml on the classpath
            hiveConf = HCatUtil.getHiveConf(new Configuration());
            hiveMetaStoreClient = new HiveMetaStoreClient(hiveConf);

            dbName = args[0];

            getDatabase(hiveMetaStoreClient, dbName);

        } catch (MetaException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (NoSuchObjectException e) {
            e.printStackTrace();
            System.out.println("===============");
            System.out.println("database " + args[0] + " does not exist");
            System.out.println("===============");
            // create the missing database, then read it back
            createDatabase(hiveMetaStoreClient, dbName);
            try {
                getDatabase(hiveMetaStoreClient, dbName);
            } catch (TException e1) {
                e1.printStackTrace();
                System.out.println("unexpected error");
            }
        } catch (TException e) {
            e.printStackTrace();
        }
    }

    public static Database getDatabase(HiveMetaStoreClient hiveMetaStoreClient, String dbName) throws TException {
        Database database = hiveMetaStoreClient.getDatabase(dbName);

        System.out.println(database.getLocationUri());
        System.out.println(database.getOwnerName());

        for (String key : database.getParameters().keySet()) {
            System.out.println(key + " = " + database.getParameters().get(key));
        }
        return database;
    }

    public static Database createDatabase(HiveMetaStoreClient hiveMetaStoreClient, String dbName) {
        HashMap<String, String> map = new HashMap<String, String>();
        Database database = new Database(dbName, "desc", null, map);
        try {
            hiveMetaStoreClient.createDatabase(database);
        } catch (TException e) {
            e.printStackTrace();
            System.out.println("createDatabase error");
        }
        return database;
    }
}
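Presumably (an assumption based on HCatUtil.getHiveConf(new Configuration()) and the use of args[0] above), this expects hive-site.xml with hive.metastore.uris on the classpath and the database name as the first program argument, e.g. java HiveMetaStoreClientTest db1.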

answered Oct 1 '16 at 14:46, edited Oct 12 '16 at 5:15 – Ram Ghadiyaram
  • Thanks @RamPrasad G, very clear example. – user2895589, Oct 1 '16 at 19:29

  • I'm confused about part_vals in getPartition: if I have a location /user/hive/warehouse/db1.db/table1/time=20170616 and I want to get this partition, what is the content of the part_vals arg? – Gary Gauh, Jun 17 '17 at 8:53

  • @GaryGauh: time is the partition column name and the value is 20170616. – Ram Ghadiyaram, Jun 17 '17 at 11:14
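To make that last exchange concrete, a small sketch (db/table/value taken from the comment above; hedged, not verified against a live metastore):

import java.util.Arrays;
import org.apache.hadoop.hive.metastore.api.Partition;

// part_vals holds the partition *values* in partition-key order;
// for .../db1.db/table1/time=20170616 that is just ["20170616"].
Partition p = hiveMetaStoreClient.getPartition("db1", "table1", Arrays.asList("20170616"));
System.out.println(p.getSd().getLocation());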