How to Fix the Hudi "HoodieException: Got runtime exception when hive syncing" Error When Syncing a Hive Table

1 Problem Description

After making it past the second hurdle, ordinary metadata syncing basically worked without a hitch. But in the following scenario, the sync "crashed" once again:

Within a single SparkSession, first read a Hudi dataset into a DataFrame, apply some transformations, and then write the transformed DataFrame out as a new Hudi table. At that point, Hudi inexplicably fails while syncing the new table's metadata. Since read-transform-write is the most typical ETL flow imaginable, you are very likely to run into this problem.
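A minimal sketch of the failing pattern, assuming a Glue job written in Scala with a SparkSession named spark in scope (the S3 paths, column names, and option list are placeholders, not the original job's values):

// Reading an existing Hudi table -- this alone is enough to "poison"
// the subsequent write on the same thread.
val df = spark.read.format("hudi").load("s3://my-bucket/hudi/source_table")

// Apply any transformations; the specifics do not matter.
val transformed = df.select("col_a", "col_b")

// Writing the result as a *new* Hudi table with hive_sync enabled
// now fails during the metadata sync.
transformed.write.format("hudi")
  .option("hoodie.datasource.hive_sync.enable", "true")
  // ... the remaining Hudi and hive_sync options ...
  .mode("append")
  .save("s3://my-bucket/hudi/target_table")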

This problem does not lie in the configuration: every setting was exactly the same as before. When I first hit it, I experimented with all kinds of operations and settings to verify, and one thing can be stated with certainty:

In Glue, within the same SparkSession, as long as any Hudi dataset has been read or written earlier, a subsequent write of a new Hudi dataset will always fail when syncing its metadata! If there is no prior Hudi operation, the sync succeeds.

2 Error Message

Here is the error message recorded in the job log:

Exception in User Class: org.apache.hudi.exception.HoodieException : Got runtime exception when hive syncing ....

If you catch the exception, the cause message it carries is:

Failed to check if database exists xxx
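For reference, a sketch of how that cause message can be surfaced, reusing the placeholder write from the sketch above:

try {
  transformed.write.format("hudi") /* ... options ... */
    .mode("append").save("s3://my-bucket/hudi/target_table")
} catch {
  case e: org.apache.hudi.exception.HoodieException =>
    // The nested cause carries the actual reason for the sync failure.
    println(e.getCause.getMessage) // "Failed to check if database exists xxx"
    throw e
}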

Here is the complete stack trace:

ERROR [main] glue.ProcessLauncher (Logging.scala:logError(70)): Exception in User Class: org.apache.hudi.exception.HoodieException : Got runtime exception when hive syncing xxx
org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:122)
org.apache.hudi.HoodieSparkSqlWriter$.org$apache$hudi$HoodieSparkSqlWriter$$syncHive(HoodieSparkSqlWriter.scala:391)
org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:440)
org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:436)
scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:436)
org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:497)
org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:222)
org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:145)
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
... // irrelevant business classes omitted
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
com.amazonaws.services.glue.SparkProcessLauncherPlugin$class.invoke(ProcessLauncher.scala:45)
com.amazonaws.services.glue.ProcessLauncher$$anon$1.invoke(ProcessLauncher.scala:76)
com.amazonaws.services.glue.ProcessLauncher.launch(ProcessLauncher.scala:115)
com.amazonaws.services.glue.ProcessLauncher$.main(ProcessLauncher.scala:26)
com.amazonaws.services.glue.ProcessLauncher.main(ProcessLauncher.scala)

3 Root Cause Analysis

First, we need to locate the original site of the error from the stack trace. The trace printed in the log is actually incomplete; the exact location is line 346 of org.apache.hudi.hive.HoodieHiveClient#doesDataBaseExist:

[screenshot: source of HoodieHiveClient#doesDataBaseExist, where the "Failed to check if database exists" exception is thrown]

Clearly, the exception is thrown the moment this client variable tries to look up the database by name. In reality the requested database does exist, so the client must simply be unable to connect to the metastore and is reporting the failure as a missing database. The question then becomes: what is this client, and how does it work?

Browsing the code further, we find that the client is declared and initialized like this:

public class HoodieHiveClient {
    ...
    private IMetaStoreClient client; // line 71
    ...
    public HoodieHiveClient(HiveSyncConfig cfg, HiveConf configuration, FileSystem fs) {
        ...
        this.client = Hive.get(configuration).getMSC(); // line 91
        ...
    }
}

It is created by Hive.get(configuration).getMSC(), so we need to look at those two methods of the Hive class, along with one important field, hiveDB:

public class Hive {
  ...
  private static ThreadLocal<Hive> hiveDB = new ThreadLocal<Hive>() {
    @Override
    protected synchronized Hive initialValue() {
      return null;
    }

    @Override
    public synchronized void remove() {
      if (this.get() != null) {
        this.get().close();
      }
      super.remove();
    }
  };
  ...
  public static Hive get(HiveConf c) throws HiveException {
    Hive db = hiveDB.get();
    if (db == null ||
        (db.metaStoreClient != null && !db.metaStoreClient.isCompatibleWith(c))) {
      return get(c, true);
    }
    db.conf = c;
    return db;
  }
 ...
  public IMetaStoreClient getMSC() throws MetaException {
    // The code that creates metaStoreClient is irrelevant to this analysis and is omitted.
    ...
    return metaStoreClient;
  }
}

One detail in this code is crucial: hiveDB is a thread-local variable (ThreadLocal) that wraps a Hive instance, so every call to Hive.get(configuration) returns the single Hive instance bound to the current thread. Keeping this in mind will be very helpful for understanding the fix given at the end.
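The caching behavior is easy to demonstrate. Below is a minimal sketch, assuming a Hive 2.x client on the classpath; it illustrates the ThreadLocal semantics and is not code from the job:

import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.metadata.Hive

val conf = new HiveConf()
val h1 = Hive.get(conf)
val h2 = Hive.get(conf)  // no config change, so the cached instance is reused
println(h1 eq h2)        // true: both come from the same ThreadLocal

Hive.closeCurrent()      // closes and removes this thread's instance
val h3 = Hive.get(conf)  // a brand-new instance is created
println(h1 eq h3)        // false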

Up to this point in the investigation, my suspicion had centered on the Hive configuration being passed in, i.e. the parameter HiveConf c, because the pattern "a standalone write works, a read followed by a write fails" strongly suggested that the earlier read had changed or triggered some Hive setting that broke the later write. But after tracing through layer after layer of code I found nothing wrong, and comparing the Hive Configuration printed in the logs showed no differences between runs (later analysis showed that some settings had in fact been rewritten along the way; I just wasn't printing them after the change). At that point the investigation hit a dead end.

To find a new lead, I did a more careful comparison of the logs from a successful sync and a failed one, and finally spotted something suspicious:

The log of the job whose metadata sync succeeded (write only, no prior read) contained one line that the failing job's log lacked:


2021-04-12 08:21:08,752 INFO [main] hive.metastore (HiveMetaStoreClient.java:isCompatibleWith(291)): Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/tmp/spark-warehouse

Let's look at the isCompatibleWith method that prints this line:

  public boolean isCompatibleWith(HiveConf conf) {
    if (currentMetaVars == null) {
      return false; // recreate
    }
    boolean compatible = true;
    for (ConfVars oneVar : HiveConf.metaVars) {
      // Since metaVars are all of different types, use string for comparison
      String oldVar = currentMetaVars.get(oneVar.varname);
      String newVar = conf.get(oneVar.varname, "");
      if (oldVar == null ||
          (oneVar.isCaseSensitive() ? !oldVar.equals(newVar) : !oldVar.equalsIgnoreCase(newVar))) {
        LOG.info("Mestastore configuration " + oneVar.varname +
            " changed from " + oldVar + " to " + newVar);
        compatible = false;
      }
    }
    return compatible;
  }

The key part of this method is the if condition wrapping LOG.info. From the observed behavior we can infer that in the successful job this condition evaluated to true, because the value of hive.metastore.warehouse.dir had changed from /user/hive/warehouse to file:/tmp/spark-warehouse (when and where this change is initiated is still unclear, mainly because the Glue server environment makes remote debugging impractical). That made isCompatibleWith return false, so the if in Hive.get evaluated to true and get(c, true) returned a new Hive instance (the true argument tells the Hive class to build a fresh instance). In other words, it was precisely this configuration change that caused Hive to create a new instance!
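To make the mechanism concrete, here is a sketch, assuming a reachable metastore, of how a change to hive.metastore.warehouse.dir forces a new Hive instance (an illustration, not code from the job):

import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.metadata.Hive

val conf1 = new HiveConf()
conf1.setVar(HiveConf.ConfVars.METASTOREWAREHOUSE, "/user/hive/warehouse")
val db1 = Hive.get(conf1)
db1.getMSC() // creating the client records the current metaVars snapshot

val conf2 = new HiveConf(conf1)
conf2.setVar(HiveConf.ConfVars.METASTOREWAREHOUSE, "file:/tmp/spark-warehouse")
// hive.metastore.warehouse.dir is one of HiveConf.metaVars, so the snapshot
// no longer matches: isCompatibleWith returns false and Hive.get replaces
// the cached thread-local instance with a fresh one.
val db2 = Hive.get(conf2)
println(db1 eq db2) // false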

For the second Hudi operation, however, nothing in the HiveConf has changed by the time execution reaches this if, so isCompatibleWith must return true (the absence of the config-change log line confirms this). The if in Hive.get therefore evaluates to false, and the previously used Hive instance is returned. Evidently that old instance's metaStoreClient no longer works: it cannot reach the metastore, as the "Failed to check if database exists" error proves. Why exactly it breaks we cannot tell from the logs (again, the Glue server environment makes remote debugging impractical, so further digging is shelved for now), but the solution all but suggests itself:

4 Solution

Right before each Hudi metadata sync, make Hudi use a brand-new Hive.metaStoreClient to connect to the metastore. Implementing this is simple: call Hive.closeCurrent() once to remove the Hive instance from the current thread. When Hudi subsequently performs the metadata sync, it will find the instance is null and trigger the construction of a new one. The reason a Hive.closeCurrent() call placed outside Hudi's source code takes effect at all is, again, that the Hive instance (hiveDB) is a thread-local variable: any code on the same thread can conveniently obtain a reference to it and operate on it!
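Applied to the ETL job from the beginning, the workaround looks like this. Everything here is the placeholder code from the earlier sketch; the single essential addition is the Hive.closeCurrent() call before the write:

import org.apache.hadoop.hive.ql.metadata.Hive

val df = spark.read.format("hudi").load("s3://my-bucket/hudi/source_table")
val transformed = df.select("col_a", "col_b")

// Drop the thread-local Hive instance so that Hudi's HiveSyncTool is forced
// to build a fresh metastore client instead of reusing the stale one.
Hive.closeCurrent()

transformed.write.format("hudi")
  .option("hoodie.datasource.hive_sync.enable", "true")
  // ... the same Hudi and hive_sync options as before ...
  .mode("append")
  .save("s3://my-bucket/hudi/target_table")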

For solutions to more problems with integrating Glue and Apache Hudi, see: AWS Glue集成Apache Hudi同步元数据深度历险(各类错误的填坑方案)

