<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>iameyamasky's Blog</title>
<link>https://ameblo.jp/iameyamasky/</link>
<atom:link href="https://rssblog.ameba.jp/iameyamasky/rss20.xml" rel="self" type="application/rss+xml" />
<atom:link rel="hub" href="http://pubsubhubbub.appspot.com" />
<description>Enter a description of the blog.</description>
<language>ja</language>
<item>
<title>Seriously Powerful: This Is How the Batch Framework Spring Batch Should Be Used! (Hands-On Scenarios)</title>
<description>
<![CDATA[ <h1>Preface</h1><p>I will skip the formal definitions. In short, Spring Batch is an easy-to-use, fairly complete batch-processing framework.</p><p>Easy to use, because it is a Spring-based framework: integration is simple, it is easy to understand, and the flow is clearly laid out.</p><p>Fairly complete, because it provides the things we normally have to think about when processing large volumes of data: log tracing, configurable transaction granularity, controllable execution, failure handling, retries, data reading and writing, and so on.</p><h1>Main content</h1><p>So what will this article give you? (With worked examples, of course.)</p><p>In terms of business scenarios, there are two:</p><ol><li><p>Read data from a csv file, process it, then store it</p></li><li><p>Read data from a database, process it, then store it</p></li></ol><p>These are the everyday jobs of data cleaning, data filtering, data migration, backup and the like. With large volumes of data there is far too much to think about when hand-rolling your own chunked processing, so the Spring Batch framework is a very good choice.</p><p>Before diving into the example, let's look at what we need to code when integrating the Spring Batch framework into a Spring Boot project.</p><p>A simple diagram:</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/d2dd484d5981d20d76e87785faacfdfb.png"></p><p>Does this diagram remind you of a scheduled-task framework? It does look a bit similar, but I must point out that this is a batch-processing framework, not a scheduling framework. That said, as mentioned above, it gives you control over execution, i.e. when a job runs is up to you, so you can extend it yourself and combine it with a scheduling framework to get whatever you have in mind.</p><p>OK, back to the topic. The diagram should show clearly what we need to implement in this example, so I will not describe each component in a wall of text.</p><p>Without further ado, let's start.</p><p>First prepare a database and create a simple table in it, for writing, storing and reading the example data.</p><p><strong>The bloginfo table</strong></p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/5fe3a5d20d99d8b7886cdd14005e8e32.png"></p><p>The create-table sql:</p><pre><code>CREATE TABLE `bloginfo` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `blogAuthor` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'Blog author ID',
  `blogUrl` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'Blog URL',
  `blogTitle` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'Blog title',
  `blogItem` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'Blog category',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 89031 CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;</code></pre><p>Core dependencies in the pom file:</p><pre><code>&lt;dependency&gt;
    &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
    &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt;
&lt;/dependency&gt;
&lt;dependency&gt;
    &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
    &lt;artifactId&gt;spring-boot-starter-test&lt;/artifactId&gt;
    &lt;scope&gt;test&lt;/scope&gt;
&lt;/dependency&gt;
&lt;!-- spring batch --&gt;
&lt;dependency&gt;
    &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
    &lt;artifactId&gt;spring-boot-starter-batch&lt;/artifactId&gt;
&lt;/dependency&gt;
&lt;!-- hibernate validator --&gt;
&lt;dependency&gt;
    &lt;groupId&gt;org.hibernate&lt;/groupId&gt;
    &lt;artifactId&gt;hibernate-validator&lt;/artifactId&gt;
    &lt;version&gt;6.0.7.Final&lt;/version&gt;
&lt;/dependency&gt;
&lt;!-- mybatis --&gt;
&lt;dependency&gt;
    &lt;groupId&gt;org.mybatis.spring.boot&lt;/groupId&gt;
    &lt;artifactId&gt;mybatis-spring-boot-starter&lt;/artifactId&gt;
    &lt;version&gt;2.0.0&lt;/version&gt;
&lt;/dependency&gt;
&lt;!-- mysql --&gt;
&lt;dependency&gt;
    &lt;groupId&gt;mysql&lt;/groupId&gt;
    &lt;artifactId&gt;mysql-connector-java&lt;/artifactId&gt;
    &lt;scope&gt;runtime&lt;/scope&gt;
&lt;/dependency&gt;
&lt;!-- druid datasource; the 1.1.x line resolves the Spring Boot 1.0 to 2.0 compatibility issue --&gt;
&lt;dependency&gt;
    &lt;groupId&gt;com.alibaba&lt;/groupId&gt;
    &lt;artifactId&gt;druid-spring-boot-starter&lt;/artifactId&gt;
    &lt;version&gt;1.1.18&lt;/version&gt;
&lt;/dependency&gt;</code></pre><p>The yml file:</p><pre><code>spring:
  batch:
    job:
      # set to false - jobs are triggered via jobLauncher.run instead of at startup
      enabled: false
    initialize-schema: always
#    table-prefix: my-batch
  datasource:
    druid:
      username: root
      password: root
      url: jdbc:mysql://localhost:3306/hellodemo?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;serverTimezone=GMT%2B8&amp;zeroDateTimeBehavior=convertToNull
      driver-class-name: com.mysql.cj.jdbc.Driver
      initialSize: 5
      minIdle: 5
      maxActive: 20
      maxWait: 60000
      timeBetweenEvictionRunsMillis: 60000
      minEvictableIdleTimeMillis: 300000
      validationQuery: SELECT 1 FROM DUAL
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      poolPreparedStatements: true
      maxPoolPreparedStatementPerConnectionSize: 20
      useGlobalDataSourceStat: true
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
server:
  port: 8665</code></pre><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/36d9e528ce167cd2d1edb5334c2a46b2.png"></p><blockquote><p>ps: we use the druid connection pool here. There is actually a small pitfall with it, which a later part of this article covers.</p></blockquote><p>Since in this example the processed data is finally written to the database for storage (you could also output it to a file, etc.), we created a table above and pulled mybatis into the pom. So before the main Spring Batch coding, let's quickly run through the simple database plumbing.</p><p>The pojo layer</p><p>BlogInfo.java
:</p><pre><code>/**
 * @Author : JCccc
 * @Description :
 **/
public class BlogInfo {

    private Integer id;
    private String blogAuthor;
    private String blogUrl;
    private String blogTitle;
    private String blogItem;

    @Override
    public String toString() {
        return "BlogInfo{" +
                "id=" + id +
                ", blogAuthor='" + blogAuthor + '\'' +
                ", blogUrl='" + blogUrl + '\'' +
                ", blogTitle='" + blogTitle + '\'' +
                ", blogItem='" + blogItem + '\'' +
                '}';
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getBlogAuthor() {
        return blogAuthor;
    }

    public void setBlogAuthor(String blogAuthor) {
        this.blogAuthor = blogAuthor;
    }

    public String getBlogUrl() {
        return blogUrl;
    }

    public void setBlogUrl(String blogUrl) {
        this.blogUrl = blogUrl;
    }

    public String getBlogTitle() {
        return blogTitle;
    }

    public void setBlogTitle(String blogTitle) {
        this.blogTitle = blogTitle;
    }

    public String getBlogItem() {
        return blogItem;
    }

    public void setBlogItem(String blogItem) {
        this.blogItem = blogItem;
    }
}</code></pre><p>The mapper layer</p><p>BlogMapper.java
:</p><blockquote><p>ps: as you can see, I use the annotation style in this example to save effort, and I also skip the service and impl layers, again to save effort. The focus of this article is not on those things, so please don't pick up these bad habits.</p></blockquote><pre><code>import com.example.batchdemo.pojo.BlogInfo;
import org.apache.ibatis.annotations.*;
import java.util.List;
import java.util.Map;

/**
 * @Author : JCccc
 * @Description :
 **/
@Mapper
public interface BlogMapper {

    @Insert("INSERT INTO bloginfo ( blogAuthor, blogUrl, blogTitle, blogItem ) VALUES ( #{blogAuthor}, #{blogUrl},#{blogTitle},#{blogItem}) ")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    int insert(BlogInfo bloginfo);

    @Select("select blogAuthor, blogUrl, blogTitle, blogItem from bloginfo where blogAuthor &lt; #{authorId}")
    List&lt;BlogInfo&gt; queryInfoById(Map&lt;String, Integer&gt; map);
}</code></pre><p>Next, the main event: we start coding each of the components involved in the diagram shown earlier.</p><p>First create a configuration class, <code>MyBatchConfig.java</code>:</p><p>As the name suggests, essentially all the configuration components for integrating Spring Batch will be written in it.</p><p>Going by the diagram above, it contains:</p><pre><code>JobRepository    registers/stores jobs
JobLauncher      runs jobs
Job              a batch job, containing one or more Steps
Step             contains an ItemReader, ItemProcessor and ItemWriter
ItemReader       data reader
ItemProcessor    data processor
ItemWriter       data writer</code></pre><p>First, annotate the MyBatchConfig class with:</p><p><code>@Configuration</code> tells Spring that this is a custom configuration class whose beans need to be loaded into the Spring container</p><p><code>@EnableBatchProcessing</code> enables batch-processing support</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/d424d1418e03d842d0d4b219529c040d.png"></p><p>Then we start writing the individual components into the MyBatchConfig class.</p><p>JobRepository</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 *
 JobRepository definition: the Job registry, which also talks to the database (transaction management, etc.)
 * @param dataSource
 * @param transactionManager
 * @return
 * @throws Exception
 */
@Bean
public JobRepository myJobRepository(DataSource dataSource, PlatformTransactionManager transactionManager) throws Exception {
    JobRepositoryFactoryBean jobRepositoryFactoryBean = new JobRepositoryFactoryBean();
    jobRepositoryFactoryBean.setDatabaseType("mysql");
    jobRepositoryFactoryBean.setTransactionManager(transactionManager);
    jobRepositoryFactoryBean.setDataSource(dataSource);
    return jobRepositoryFactoryBean.getObject();
}</code></pre><p>JobLauncher</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 * jobLauncher definition: the job runner, bound to the jobRepository above
 * @param dataSource
 * @param transactionManager
 * @return
 * @throws Exception
 */
@Bean
public SimpleJobLauncher myJobLauncher(DataSource dataSource, PlatformTransactionManager transactionManager) throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    // set the jobRepository
    jobLauncher.setJobRepository(myJobRepository(dataSource, transactionManager));
    return jobLauncher;
}</code></pre><p>Job</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 * define the job
 * @param jobs
 * @param myStep
 * @return
 */
@Bean
public Job myJob(JobBuilderFactory jobs, Step myStep) {
    return jobs.get("myJob")
            .incrementer(new RunIdIncrementer())
            .flow(myStep)
            .end()
            .listener(myJobListener())
            .build();
}</code></pre><p>A listener can be configured for the Job's execution.</p><p>JobListener</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 * register the job listener
 * @return
 */
@Bean
public MyJobListener myJobListener() {
    return new MyJobListener();
}</code></pre><p>This is a listener we define ourselves, so it is created separately, <code>MyJobListener.java</code>:</p><pre><code>/**
 * @Author : JCccc
 * @Description : monitors the Job's execution; implements JobExecutionListener, and is bound to the Job bean in the batch configuration class
 **/
public class MyJobListener implements JobExecutionListener {

    private Logger logger = LoggerFactory.getLogger(MyJobListener.class);

    @Override
    public void beforeJob(JobExecution jobExecution) {
        logger.info("job started, id={}", jobExecution.getJobId());
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        logger.info("job finished, id={}", jobExecution.getJobId());
    }
}</code></pre><p>Step (ItemReader, ItemProcessor, ItemWriter)</p><p>A step contains the implementations of three components: the data reader, the data processor and the data writer.</p><p>We will break them down and write them one by one.</p><p>As mentioned at the start, this article implements two scenarios: reading bulk data from a csv file for processing, and reading bulk data from a database table for processing.</p><p>Reading data from a CSV file</p><p>ItemReader</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 *
 ItemReader definition: reads the file data and maps it onto the entity class
 * @return
 */
@Bean
public ItemReader&lt;BlogInfo&gt; reader() {
    // use a FlatFileItemReader to read the csv file, one line per record
    FlatFileItemReader&lt;BlogInfo&gt; reader = new FlatFileItemReader&lt;&gt;();
    // path where the file lives
    reader.setResource(new ClassPathResource("static/bloginfo.csv"));
    // map the csv columns onto the entity
    reader.setLineMapper(new DefaultLineMapper&lt;BlogInfo&gt;() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[]{"blogAuthor", "blogUrl", "blogTitle", "blogItem"});
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper&lt;BlogInfo&gt;() {
                {
                    setTargetType(BlogInfo.class);
                }
            });
        }
    });
    return reader;
}</code></pre><p>Quick code walkthrough:</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/3a1152b176d401b6f7a1858210432754.png"></p><p>For the data reader ItemReader, we set up a read listener for it; create <code>MyReadListener.java</code>:</p><pre><code>/**
 * @Author : JCccc
 * @Description :
 **/
public class MyReadListener implements ItemReadListener&lt;BlogInfo&gt; {

    private Logger logger = LoggerFactory.getLogger(MyReadListener.class);

    @Override
    public void beforeRead() {
    }

    @Override
    public void afterRead(BlogInfo item) {
    }

    @Override
    public void onReadError(Exception ex) {
        try {
            logger.info(format("%s%n", ex.getMessage()));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}</code></pre><p>ItemProcessor</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 * register the ItemProcessor: processes and validates the data
 * @return
 */
@Bean
public ItemProcessor&lt;BlogInfo, BlogInfo&gt; processor() {
    MyItemProcessor myItemProcessor = new MyItemProcessor();
    // set the validator
    myItemProcessor.setValidator(myBeanValidator());
    return myItemProcessor;
}</code></pre><p>The data processor is our own class; it mainly contains our business logic for processing the data, and we also set a data validator on it. Here we use the JSR-303 Validator as the validator.</p><p>The validator</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 * register the validator
 * @return
 */
@Bean
public MyBeanValidator myBeanValidator() {
    return new MyBeanValidator&lt;BlogInfo&gt;();
}</code></pre><p>Create <code>MyItemProcessor.java</code>:</p><blockquote><p>ps: my processing logic here is to take each record's blogItem field and, if it equals springboot, replace the value of the title field.</p></blockquote><p>It just simulates a simple data-processing scenario.</p><pre><code>import com.example.batchdemo.pojo.BlogInfo;
import org.springframework.batch.item.validator.ValidatingItemProcessor;
import org.springframework.batch.item.validator.ValidationException;

/**
 * @Author : JCccc
 * @Description :
 **/
public class MyItemProcessor extends ValidatingItemProcessor&lt;BlogInfo&gt; {
    @Override
    public BlogInfo process(BlogInfo item) throws ValidationException {
        /**
         * super.process(item) must be called, otherwise the custom validator is not invoked
         */
        super.process(item);
        /**
         * simple processing of the data
         */
        if (item.getBlogItem().equals("springboot")) {
            item.setBlogTitle("springboot series, do check out mine (Jc)");
        } else {
            item.setBlogTitle("unknown series");
        }
        return item;
    }
}</code></pre><p>Create MyBeanValidator.java:</p><pre><code>import org.springframework.batch.item.validator.ValidationException;
import org.springframework.batch.item.validator.Validator;
import org.springframework.beans.factory.InitializingBean;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.ValidatorFactory;
import java.util.Set;

/**
 * @Author : JCccc
 * @Description :
 **/
public class MyBeanValidator&lt;T&gt; implements Validator&lt;T&gt;, InitializingBean {

    private javax.validation.Validator validator;

    @Override
    public void validate(T value) throws ValidationException {
        /**
         * validate the data using the Validator's validate method
         */
        Set&lt;ConstraintViolation&lt;T&gt;&gt; constraintViolations =
                validator.validate(value);
        if (constraintViolations.size() &gt; 0) {
            StringBuilder message = new StringBuilder();
            for (ConstraintViolation&lt;T&gt; constraintViolation : constraintViolations) {
                message.append(constraintViolation.getMessage() + "\n");
            }
            throw new ValidationException(message.toString());
        }
    }

    /**
     * we use the JSR-303 Validator to validate our data; it is initialised here
     * @throws Exception
     */
    @Override
    public void afterPropertiesSet() throws Exception {
        ValidatorFactory validatorFactory =
                Validation.buildDefaultValidatorFactory();
        validator = validatorFactory.usingContext().getValidator();
    }
}</code></pre><blockquote><p>ps: this article doesn't actually exercise this data validator. If you want to use it, add validation annotations such as @NotNull, @Max, @Email and so on to the entity class. I lean towards handling things directly in the processor, keeping all the data-processing code in one place.</p></blockquote><p>ItemWriter</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 *
 ItemWriter definition: binds the datasource, sets the batch insert sql statement, writes to the database
 * @param dataSource
 * @return
 */
@Bean
public ItemWriter&lt;BlogInfo&gt; writer(DataSource dataSource) {
    // use a JdbcBatchItemWriter to write the data to the database
    JdbcBatchItemWriter&lt;BlogInfo&gt; writer = new JdbcBatchItemWriter&lt;&gt;();
    // parameterised sql statement
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider&lt;BlogInfo&gt;());
    String sql = "insert into bloginfo " + " (blogAuthor,blogUrl,blogTitle,blogItem) "
            + " values(:blogAuthor,:blogUrl,:blogTitle,:blogItem)";
    writer.setSql(sql);
    writer.setDataSource(dataSource);
    return writer;
}</code></pre><p>Quick code walkthrough:</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/e25eafc58d4f10f0e3e146e17aa9252e.png"></p><p>Likewise, for the data writer ItemWriter, we set up a write listener for it; create <code>MyWriteListener.java</code>:</p><pre><code>import com.example.batchdemo.pojo.BlogInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.ItemWriteListener;
import java.util.List;
import static java.lang.String.format;

/**
 * @Author : JCccc
 * @Description :
 **/
public class MyWriteListener implements ItemWriteListener&lt;BlogInfo&gt; {

    private Logger logger = LoggerFactory.getLogger(MyWriteListener.class);

    @Override
    public void beforeWrite(List&lt;? extends BlogInfo&gt; items) {
    }

    @Override
    public void afterWrite(List&lt;? extends BlogInfo&gt; items) {
    }

    @Override
    public void onWriteError(Exception exception, List&lt;? extends BlogInfo&gt; items) {
        try {
            logger.info(format("%s%n", exception.getMessage()));
            for (BlogInfo message : items) {
                logger.info(format("Failed writing BlogInfo : %s", message.toString()));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}</code></pre><p><code>ItemReader</code>, <code>ItemProcessor</code> and <code>ItemWriter</code> are now all implemented, so the next thing is to bind these three components to our step.</p><p>Written in the MyBatchConfig class</p><pre><code>/**
 * step definition:
 * includes
 * ItemReader    reading
 * ItemProcessor    processing
 * ItemWriter    writing
 * @param stepBuilderFactory
 * @param reader
 * @param writer
 * @param processor
 * @return
 */
@Bean
public Step myStep(StepBuilderFactory stepBuilderFactory, ItemReader&lt;BlogInfo&gt; reader,
                   ItemWriter&lt;BlogInfo&gt; writer, ItemProcessor&lt;BlogInfo, BlogInfo&gt; processor) {
    return stepBuilderFactory
            .get("myStep")
            .&lt;BlogInfo, BlogInfo&gt;chunk(65000) // chunk mechanism: read one record, process one record, and once the configured count has accumulated, hand the whole batch to the writer in one go
            .reader(reader).faultTolerant().retryLimit(3).retry(Exception.class).skip(Exception.class).skipLimit(2)
            .listener(new MyReadListener())
            .processor(processor)
            .writer(writer).faultTolerant().skip(Exception.class).skipLimit(2)
            .listener(new MyWriteListener())
            .build();
}</code></pre><p>A few words about this Step.</p><p>As mentioned earlier, the spring batch framework provides transaction control, restart, skip-on-failure and other mechanisms.</p><p>Much of that is implemented through the settings of this step stage.</p><p>The first setting in the code is <code>chunk(65000)</code>, the chunk mechanism: records are read and processed one at a time, and once a certain number have accumulated, the whole batch is handed to the writer for a single write operation.</p><p>That is what the step stage is: reading and processing the data, and finally writing it out.</p><p>The 65000 we pass into the chunk mechanism tells it to keep reading and processing records until 65,000 have accumulated, then execute one batched write.</p><p>The right value depends on the specific business: it could be 500 per batch, 1,000 per batch, or just 20 or 50 per batch.</p><p>A small diagram to help understand:</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/5fb39e7680bdb935559ad016fe89670d.png"></p><p>When processing large amounts of data, whether reading or writing, some known or unknown factor is bound to make the odd record fail.</p><p>If we configure nothing, does one failed record mean the whole run has failed? That would clearly be far too unforgiving, so spring batch provides the retry and skip settings (and restart as well), two settings that handle failing records much more gracefully.</p><pre><code>retryLimit(3).retry(Exception.class)</code></pre><p>This configures retrying: how many times to retry when an exception occurs. We set 3, meaning a failing record is retried 3 times; if it still fails, it is treated as failed, and if skip is configured (recommended), the failed record is then left for skip to deal with.</p><pre><code>skip(Exception.class).skipLimit(2)</code></pre><p>skip means skipping: if we set the limit to 3, up to 3 failed records can be tolerated. Only when the number of failed records reaches the limit is the step aborted.</p><p>For failed records, we have the listeners and the logged exception details, so they can be patched up manually afterwards.</p><p>Next we invoke this batch job: we trigger it through an endpoint, so create a new controller, <code>TestController.java</code>:</p><pre><code>/**
 * @Author : JCccc
 * @Description :
 **/
@RestController
public class TestController {

    @Autowired
    SimpleJobLauncher jobLauncher;

    @Autowired
    Job myJob;

    @GetMapping("testJob")
    public void testJob() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
        // optional parameters: bind them via the JobParameters addLong / addString and similar methods
等方法&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;JobParameters&nbsp;jobParameters&nbsp;=&nbsp;new&nbsp;JobParametersBuilder().toJobParameters();&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;jobLauncher.run(myJob,&nbsp;jobParameters);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}}</code></pre><p>对了，我准备了一个csv文件&nbsp;<code>bloginfo.csv</code>，里面大概8万多条数据，用来进行批处理测试：</p><p>&nbsp;</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/0cca9532f8c826013b805bd6d2089bda.png"></p><p>这个文件的路径跟我们的数据读取器里面读取的路径要一直，</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/6c10047b2d3af991cfbc175d924281f6.png"></p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/aa0e81e38d85c9174e2df66529e59e69.png"></p><p>目前我们数据库是这个样子，</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/aca8c38d39c118040dce210e72ca9b66.png"></p><p>接下来我们把我们的项目启动起来，再看一眼数据库，生成了一些batch用来跟踪记录job的一些数据表：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/ede3cdfa6b7b03191207f6fe21a57718.png"></p><p>我们来调用一下testJob接口，</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/7a3f93e02093b555283d85c8c94ca6bf.png"></p><p>然后看下数据库，可以看的数据全部都进行了相关的逻辑处理并插入到了数据库：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/a28f8017dd956276a4d9b628b03485c1.png"></p><p>到这里，我们对Springboot 整合 spring batch 其实已经操作完毕了，也实现了从csv文件读取数据处理存储的业务场景。</p><p>从数据库读取数据</p><blockquote><p>ps：前排提示使用druid有坑。后面会讲到。</p></blockquote><p>那么接下来实现场景，从数据库表内读取数据进行处理输出到新的表里面。</p><p>那么基于我们上边的整合，我们已经实现了</p><pre><code>JobRepository&nbsp;job的注册/存储器JobLauncher&nbsp;job的执行器&nbsp;Job&nbsp;job任务，包含一个或多个StepStep&nbsp;包含（ItemReader、ItemProcessor和ItemWriter)&nbsp;ItemReader&nbsp;数据读取器&nbsp;ItemProcessor&nbsp;数据处理器ItemWriter&nbsp;数据输出器job&nbsp;监听器reader&nbsp;监听器writer&nbsp;监听器process&nbsp;数据校验器</code></pre><p>那么对于我们新写一个job完成 
一个新的场景，我们需要全部重写么？</p><p>显然没必要，当然完全新写一套也是可以的。</p><p>那么该篇，对于一个新的业务场景（从csv文件读取数据改为从数据库表读取数据），我们需要新建的有：</p><ol><li><p><strong>数据读取器：</strong>&nbsp;&nbsp;原先使用的是&nbsp;<code>FlatFileItemReader</code>&nbsp;，我们现在改为使用&nbsp;<code>MyBatisCursorItemReader</code></p></li><li><p><strong>数据处理器：</strong>&nbsp;&nbsp;新的场景，业务为了好扩展，所以我们处理器最好也新建一个</p></li><li><p><strong>数据输出器：</strong>&nbsp;&nbsp;新的场景，业务为了好扩展，所以我们数据输出器最好也新建一个</p></li><li><p><strong>step的绑定设置：</strong>&nbsp;新的场景，业务为了好扩展，所以我们step最好也新建一个</p></li><li><p><strong>Job：</strong>&nbsp;&nbsp;当然是要重新写一个了</p></li></ol><p>其他我们照用原先的就行，JobRepository、JobLauncher以及各种监听器啥的，暂且不重新建了。</p><p>新建<code>MyItemProcessorNew.java</code>：</p><pre><code>import&nbsp;org.springframework.batch.item.validator.ValidatingItemProcessor;import&nbsp;org.springframework.batch.item.validator.ValidationException;&nbsp;/**&nbsp;*&nbsp;@Author&nbsp;:&nbsp;JCccc&nbsp;*&nbsp;@Description&nbsp;:&nbsp;**/public&nbsp;class&nbsp;MyItemProcessorNew&nbsp;extends&nbsp;ValidatingItemProcessor&lt;BlogInfo&gt;&nbsp;{&nbsp;&nbsp;&nbsp;&nbsp;@Override&nbsp;&nbsp;&nbsp;&nbsp;public&nbsp;BlogInfo&nbsp;process(BlogInfo&nbsp;item)&nbsp;throws&nbsp;ValidationException&nbsp;{&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;需要执行super.process(item)才会调用自定义校验器&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;super.process(item);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;对数据进行简单的处理&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Integer&nbsp;authorId=&nbsp;Integer.valueOf(item.getBlogAuthor());&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(authorId&lt;20000)&nbsp;{&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;item.setBlogTitle("这是都是小于20000的数据");&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;else&nbsp;if&n
bsp;(authorId&gt;20000&nbsp;&amp;&amp;&nbsp;authorId&lt;30000){&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;item.setBlogTitle("这是都是小于30000但是大于20000的数据");&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}else&nbsp;{&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;item.setBlogTitle("旧书不厌百回读");&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;item;&nbsp;&nbsp;&nbsp;&nbsp;}}</code></pre><p>然后其他重新定义的小组件，写在MyBatchConfig类里：</p><pre><code>/**&nbsp;*&nbsp;定义job&nbsp;*&nbsp;@param&nbsp;jobs&nbsp;*&nbsp;@param&nbsp;stepNew&nbsp;*&nbsp;@return&nbsp;*/@Beanpublic&nbsp;Job&nbsp;myJobNew(JobBuilderFactory&nbsp;jobs,&nbsp;Step&nbsp;stepNew){&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;jobs.get("myJobNew")&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.incrementer(new&nbsp;RunIdIncrementer())&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.flow(stepNew)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.end()&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.listener(myJobListener())&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.build();}@Beanpublic&nbsp;Step&nbsp;stepNew(StepBuilderFactory&nbsp;stepBuilderFactory,&nbsp;MyBatisCursorItemReader&lt;BlogInfo&gt;&nbsp;itemReaderNew,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ItemWriter&lt;BlogInfo&gt;&nbsp;writerNew,&nbsp;ItemProcessor&lt;BlogInfo,&nbsp;BlogInfo&gt;&nbsp;processorNew){&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;stepBuilderFactory&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.get("stepNew")&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.&lt;BlogInfo,&nbsp;BlogInfo&gt;chunk(65000)&nbsp;//&nbsp;Chunk的机制(即每次读取一条数据，再处理一条数据，累积到一定数量后再一次性交给writer进行写入操作)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs
p;&nbsp;&nbsp;&nbsp;&nbsp;.reader(itemReaderNew).faultTolerant().retryLimit(3).retry(Exception.class).skip(Exception.class).skipLimit(10)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.listener(new&nbsp;MyReadListener())&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.processor(processorNew)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.writer(writerNew).faultTolerant().skip(Exception.class).skipLimit(2)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.listener(new&nbsp;MyWriteListener())&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.build();}@Beanpublic&nbsp;ItemProcessor&lt;BlogInfo,&nbsp;BlogInfo&gt;&nbsp;processorNew(){&nbsp;&nbsp;&nbsp;&nbsp;MyItemProcessorNew&nbsp;csvItemProcessor&nbsp;=&nbsp;new&nbsp;MyItemProcessorNew();&nbsp;&nbsp;&nbsp;&nbsp;//&nbsp;设置校验器&nbsp;&nbsp;&nbsp;&nbsp;csvItemProcessor.setValidator(myBeanValidator());&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;csvItemProcessor;}@Autowiredprivate&nbsp;SqlSessionFactory&nbsp;sqlSessionFactory;@Bean@StepScope//Spring Batch提供了一个特殊的bean scope类（StepScope:作为一个自定义的Spring bean scope）。这个step 
scope的作用是连接batches的各个steps。这个机制允许配置在Spring的beans当steps开始时才实例化并且允许你为这个step指定配置和参数。public&nbsp;MyBatisCursorItemReader&lt;BlogInfo&gt;&nbsp;itemReaderNew(@Value("#{jobParameters[authorId]}")&nbsp;String&nbsp;authorId)&nbsp;{&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.out.println("开始查询数据库");&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MyBatisCursorItemReader&lt;BlogInfo&gt;&nbsp;reader&nbsp;=&nbsp;new&nbsp;MyBatisCursorItemReader&lt;&gt;();&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;reader.setQueryId("com.example.batchdemo.mapper.BlogMapper.queryInfoById");&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;reader.setSqlSessionFactory(sqlSessionFactory);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Map&lt;String&nbsp;,&nbsp;Object&gt;&nbsp;map&nbsp;=&nbsp;new&nbsp;HashMap&lt;&gt;();&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;map.put("authorId"&nbsp;,&nbsp;Integer.valueOf(authorId));&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;reader.setParameterValues(map);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;reader;}/**&nbsp;* 
ItemWriter定义：指定datasource，设置批量插入sql语句，写入数据库&nbsp;*&nbsp;@param&nbsp;dataSource&nbsp;*&nbsp;@return&nbsp;*/@Beanpublic&nbsp;ItemWriter&lt;BlogInfo&gt;&nbsp;writerNew(DataSource&nbsp;dataSource){&nbsp;&nbsp;&nbsp;&nbsp;//&nbsp;使用JdbcBatchItemWriter写数据到数据库中&nbsp;&nbsp;&nbsp;&nbsp;JdbcBatchItemWriter&lt;BlogInfo&gt;&nbsp;writer&nbsp;=&nbsp;new&nbsp;JdbcBatchItemWriter&lt;&gt;();&nbsp;&nbsp;&nbsp;&nbsp;//&nbsp;设置有参数的sql语句&nbsp;&nbsp;&nbsp;&nbsp;writer.setItemSqlParameterSourceProvider(new&nbsp;BeanPropertyItemSqlParameterSourceProvider&lt;BlogInfo&gt;());&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;sql&nbsp;=&nbsp;"insert&nbsp;into&nbsp;bloginfonew&nbsp;"+"&nbsp;(blogAuthor,blogUrl,blogTitle,blogItem)&nbsp;"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;+"&nbsp;values(:blogAuthor,:blogUrl,:blogTitle,:blogItem)";&nbsp;&nbsp;&nbsp;&nbsp;writer.setSql(sql);&nbsp;&nbsp;&nbsp;&nbsp;writer.setDataSource(dataSource);&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;writer;}</code></pre><h3>代码需要注意的点</h3><p>数据读取器&nbsp;<code>MyBatisCursorItemReader</code></p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/95b46d1eab9754866af8caf510d4dae4.png"></p><p>对应的mapper方法：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/e3c4038b8d78d0813a774d256aa4d3b7.png"></p><p>数据处理器 MyItemProcessorNew：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/ba87704df3f11f9422c2862dddc76038.png"></p><p>数据输出器，特意把数据插入到另一张数据库表，方便测试：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/276b46fafdb7b9c600d259ea865d6f8a.png"></p><p>当然我们的数据库为了测试这个场景，也新建了一张表：bloginfonew 表。</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/d8227d126fc401830af3f8716c7e511b.png"></p><p>接下来，我们新写一个接口来执行新的这个job：</p><p><img alt="" 
src="https://img-blog.csdnimg.cn/img_convert/0a65b1f16beb12b813ae2f94cc9bf0e0.png"></p><pre><code>@AutowiredSimpleJobLauncher&nbsp;jobLauncher;@AutowiredJob&nbsp;myJobNew;@GetMapping("testJobNew")public&nbsp;&nbsp;void&nbsp;testJobNew(@RequestParam("authorId")&nbsp;String&nbsp;authorId)&nbsp;throws&nbsp;JobParametersInvalidException,&nbsp;JobExecutionAlreadyRunningException,&nbsp;JobRestartException,&nbsp;JobInstanceAlreadyCompleteException&nbsp;{&nbsp;&nbsp;&nbsp;&nbsp;JobParameters&nbsp;jobParametersNew&nbsp;=&nbsp;new&nbsp;JobParametersBuilder().addLong("timeNew",&nbsp;System.currentTimeMillis())&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.addString("authorId",authorId)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.toJobParameters();&nbsp;&nbsp;&nbsp;&nbsp;jobLauncher.run(myJobNew,jobParametersNew);}</code></pre><p>ok，我们来调用一下这个接口：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/8ec3d56b2378585dc8b76684006c4352.png"></p><p>看下控制台：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/260a066e03fb8a582e5e08b98004db44.png"></p><p>没错，这就是失败的，原因是跟druid有关，报了一个数据库功能不支持。这是在数据读取的时候报的错。</p><p>我初步测试认为是 druid 数据库连接池不支持&nbsp;<code>MyBatisCursorItemReader</code>&nbsp;。</p><p>那么，我们只需要：</p><ol><li><p>注释掉druid连接池 jar依赖</p></li></ol><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/bf5ad892c8af15cdba105d894893f7e2.png"></p><ol start="2"><li><p>yml里替换连接池配置</p></li></ol><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/531838f69222443fdfd7bf2cb7e46385.png"></p><p>其实我们不配置其他连接池，springboot 2.X 版本已经为我们整合了默认的连接池 HikariCP 。</p><blockquote><p>在Springboot2.X版本，数据库的连接池官方推荐使用HikariCP</p></blockquote><p>如果不是为了druid的那些后台监控数据，sql分析等等，完全是优先使用HikariCP的。</p><p>官方的原话：</p><blockquote><blockquote><p>We prefer HikariCP for its performance and concurrency. 
If HikariCP is available, we always choose it.</p></blockquote></blockquote><p>翻译：</p><blockquote><p>出于性能和并发性方面的考虑，我们更推荐HikariCP。只要HikariCP可用，我们总是选择它。</p></blockquote><p>所以我们就啥连接池也不配了，使用默认的HikariCP 连接池。</p><p>当然你想配，也是可以的：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/c8216b43d92931159bb0ca11913db00c.png"></p><p>所以我们剔除掉druid连接池后，再来调用一下新接口：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/85dec4743631bb936465147869a54435.png"></p><p>可以看到，从数据库获取数据并进行批次处理写入的job是成功的：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/8768c0741e2311cf83fb63d22bd66ac3.png"></p><p>新的表里面插入的数据都进行了自己写的逻辑处理：</p><p><img alt="" src="https://img-blog.csdnimg.cn/img_convert/7040d10f49e2c082176e0efb00b6abb2.png"></p><p>好了，springboot 整合 spring batch 批处理框架，就到此吧。</p><p><img alt="非常强，批处理框架 Spring Batch 就该这么用！（场景实战）" src="https://img-proxy.blog-video.jp/images?url=http%3A%2F%2Fpic.xiahunao.cn%2Fwd%2F%25E9%259D%259E%25E5%25B8%25B8%25E5%25BC%25BA%25EF%25BC%258C%25E6%2589%25B9%25E5%25A4%2584%25E7%2590%2586%25E6%25A1%2586%25E6%259E%25B6%2520Spring%2520Batch%2520%25E5%25B0%25B1%25E8%25AF%25A5%25E8%25BF%2599%25E4%25B9%2588%25E7%2594%25A8%25EF%25BC%2581%25EF%25BC%2588%25E5%259C%25BA%25E6%2599%25AF%25E5%25AE%259E%25E6%2588%2598%25EF%25BC%2589"></p>
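补充一段帮助理解上文 chunk 机制的独立示意代码（仅为概念示意，与 Spring Batch 的真实内部实现无关，类名 ChunkLoopSketch 及各方法名均为假设）：逐条 read、逐条 process，攒够 chunkSize 条后一次性交给 writer 写出。

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.UnaryOperator;

// 示意：chunk 机制的核心循环 —— 读一条、处理一条，
// 累积到 chunkSize 条后一次性写出
public class ChunkLoopSketch {
    public static <T> int run(Iterator<T> reader, UnaryOperator<T> processor,
                              Consumer<List<T>> writer, int chunkSize) {
        List<T> buffer = new ArrayList<>();
        int chunkCount = 0;
        while (reader.hasNext()) {
            buffer.add(processor.apply(reader.next())); // 读一条、处理一条
            if (buffer.size() == chunkSize) {           // 攒够一个 chunk
                writer.accept(buffer);                  // 一次性交给 writer
                buffer = new ArrayList<>();
                chunkCount++;
            }
        }
        if (!buffer.isEmpty()) {                        // 不足一个 chunk 的尾巴也要写出
            writer.accept(buffer);
            chunkCount++;
        }
        return chunkCount;
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(1, 2, 3, 4, 5, 6, 7);
        // 7 条数据、chunkSize=3，会分 3 个批次写出
        int chunks = run(data.iterator(), x -> x * 10,
                batch -> System.out.println("write " + batch), 3);
        System.out.println("chunks=" + chunks);
    }
}
```

可以看出，chunkSize 越大，writer 被调用的次数越少，但单次写入的事务也越大，这正是正文里"传值要根据具体业务而定"的原因。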
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788375291.html</link>
<pubDate>Thu, 09 Feb 2023 01:48:09 +0900</pubDate>
</item>
<item>
<title>在项目中使用到了 mybatisplus的批量插入方法</title>
<description>
<![CDATA[ <p>问题描述：<br>在项目中使用到了 mybatisplus的批量插入方法，底层来自<br>com.baomidou.mybatisplus.extension.service;<br>默认的batchSize不填就是1000。<br>但是在实际测试的情况下，调用saveBatch方法批量insert 1000条数据耗时大概20秒左右，这非常的不科学。<br>20秒多不多？我批量80000条数据分批执行接近半小时。。。<br><br>解决方案：<br>先给大家结论，就是我们文章的开头提到的参数 rewriteBatchedStatements。<br>在jdbc连接上加入rewriteBatchedStatements=true即可实现多条更新语句合并提交给mysql（合并条数需要看batchSize设置）。<br><br>jdbc:mysql://xxx.xx.cn:3306/test?zeroDateTimeBehavior=convertToNull&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;rewriteBatchedStatements=true<br>加上rewriteBatchedStatements后实际测试1000条数据的耗时从 20秒 降到 400ms。这才是正常的。<br>&nbsp;</p>
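为避免手拼连接串时漏掉或拼错这个参数，可以用一个小工具方法统一追加（示意代码，类名 JdbcUrlDemo 和方法名 withBatchRewrite 均为假设）：

```java
// 示意：向 JDBC URL 追加 rewriteBatchedStatements=true，
// 自动区分 URL 是否已带查询参数
public class JdbcUrlDemo {
    public static String withBatchRewrite(String url) {
        String param = "rewriteBatchedStatements=true";
        if (url.contains(param)) {
            return url; // 已经带了该参数，原样返回
        }
        return url + (url.contains("?") ? "&" : "?") + param;
    }

    public static void main(String[] args) {
        System.out.println(withBatchRewrite("jdbc:mysql://localhost:3306/test"));
        System.out.println(withBatchRewrite("jdbc:mysql://localhost:3306/test?useUnicode=true"));
    }
}
```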
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788374080.html</link>
<pubDate>Thu, 09 Feb 2023 01:21:14 +0900</pubDate>
</item>
<item>
<title>Mybatis-plus批量插入和批量修改数据速度缓慢</title>
<description>
<![CDATA[ <p>一、Mybatis-plus批量插入和批量修改数据速度缓慢<br>尝试过以下几种方式：<br>1.使用mybatis-plus的saveBatch方法<br>2.使用流的并行方法：insertList.parallelStream().map()<br>3.一条条直接保存<br>得到的结果都是很慢<br><br>1、代码<br>&nbsp; &nbsp; @Override<br>&nbsp; &nbsp; @Transactional//如果出现异常mybatis-plus事务回滚<br>&nbsp; &nbsp; public List&lt;SchoolStudent&gt; addSchoolStudent1(List&lt;SchoolStudent&gt; schoolStudentList) {<br>&nbsp; &nbsp; &nbsp; &nbsp; for (int i = 100; i &lt; 300; i++) {<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; SchoolStudent schoolStudent = new SchoolStudent();<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; schoolStudent.setId(i);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; schoolStudentList.add(schoolStudent);<br>&nbsp; &nbsp; &nbsp; &nbsp; }<br>&nbsp; &nbsp; &nbsp; &nbsp; this.saveBatch(schoolStudentList);<br><br>&nbsp; &nbsp; &nbsp; &nbsp; return schoolStudentList;<br>&nbsp; &nbsp; }<br>用mybatis批量插入近200条的数据，接口响应大概用时8s<br><br><br>2、解决办法<br>给MySQL数据库连接加上相应参数，便将批量插入速度大大提升，接口响应速度不到2s<br><br>rewriteBatchedStatements=true<br><br><br>&nbsp;</p>
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373962.html</link>
<pubDate>Thu, 09 Feb 2023 01:18:57 +0900</pubDate>
</item>
<item>
<title>MyBatisPlus 批量插入速度慢问题解决方案</title>
<description>
<![CDATA[ <h1 id="articleContentId">1、saveBatch批量插入等批量操作耗时特别长，对于上万条数据更是十几秒<br>答：先说解决方案：jdbc URL后追加参数 rewriteBatchedStatements=true，让多个insert/update/delete语句同一批次提交，而不是分开多次提交；除此之外，还需要自定义insert sql，不使用MP的saveXXX<br>2、原理<br>mysql jdbc driver发布文档，从3.1.13开始加入了该功能：https://dev.mysql.com/doc/relnotes/connector-j/5.1/en/news-3-1-13.html<br><br>添加了性能功能，重写批处理执行 Statement.executeBatch()（对于所有 DML 语句）和 PreparedStatement.executeBatch()（仅对于具有 VALUE 子句的 INSERT）。通过在 JDBC URL 中使用“rewriteBatchedStatements=true”来启用。（错误＃18041）<br><br>doc文档：https://dev.mysql.com/doc/connectors/en/connector-j-connp-props-performance-extensions.html#cj-conn-prop_rewriteBatchedStatements<br><br>文档截图：<br><br><br>3、MP源码解析<br><br><br>如果不传batchSize，默认每1000条提交一次<br>INSERT_ONE("insert", "插入一条数据（选择字段插入）", "&lt;script&gt;\nINSERT INTO %s %s VALUES %s\n&lt;/script&gt;"),<br>public boolean saveBatch(Collection&lt;T&gt; entityList, int batchSize) {<br>&nbsp; &nbsp; String sqlStatement = getSqlStatement(SqlMethod.INSERT_ONE);<br>&nbsp; &nbsp; return executeBatch(entityList, batchSize, (sqlSession, entity) -&gt; sqlSession.insert(sqlStatement, entity));<br>}<br>可以看到，MP其实是在循环里面逐个insert的，达到批次阈值，就刷到数据库<br><br>&nbsp;</h1>
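rewriteBatchedStatements 对批量 INSERT 的改写效果，概念上相当于把 N 条单行 INSERT 合并成一条 multi-values 语句。下面用一小段示意代码模拟这一合并（仅为概念演示，表名、列名均为示例，与驱动内部实现无关）：

```java
import java.util.List;

// 示意：把多组 VALUES 合并为一条 multi-values INSERT，
// 这大致就是驱动在 rewriteBatchedStatements=true 时对批量 INSERT 所做的改写
public class MultiValuesInsertSketch {
    public static String rewrite(String insertPrefix, List<String> valueRows) {
        // 多行 VALUES 逐项用逗号连接，只发送一条语句
        return insertPrefix + " VALUES " + String.join(", ", valueRows);
    }

    public static void main(String[] args) {
        String sql = rewrite("INSERT INTO employee (name, age)",
                List.of("('a', 20)", "('b', 21)", "('c', 22)"));
        System.out.println(sql);
        // INSERT INTO employee (name, age) VALUES ('a', 20), ('b', 21), ('c', 22)
    }
}
```

合并后只需一次网络往返和一次语句解析，这就是该参数能大幅提速的原因。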
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373899.html</link>
<pubDate>Thu, 09 Feb 2023 01:17:43 +0900</pubDate>
</item>
<item>
<title>Mybatis-plus批量插入、批量修改数据saveBatch等速度缓慢</title>
<description>
<![CDATA[ <h3>Mybatis-plus批量插入、批量修改数据saveBatch等速度缓慢</h3><blockquote><p><a href="https://www.cnblogs.com/arebirth/p/mybatissavebatchslow.html" title="Arebirth博客园: 原文链接">Arebirth博客园: 原文链接</a></p></blockquote><h2>背景</h2><p>使用mybatisPlus. 
不管是<code>updateBatch</code>, 还是<code>saveBatch</code>, 800条左右的数据,<br>耗时都超过1s以上</p><p>尝试更改每次批量处理的数量, 比如:</p><pre data-index="0"><code>super.updateBatchById(list,1000);</code></pre><p>如果不传第二个参数, mybatisPlus默认是1000. 这个值可以调整, 发现低于1000时, 耗时增加到<code>1500</code>到<code>2500</code>ms左右, 因为sql分多次执行, 中间IO请求的耗时比较大.<br>设置为1000(因为总共测试数据都没有1000, 所以没有尝试更大的值), 为<code>900</code>ms到<code>1200</code>ms左右.</p><p>这时间太久了, 在网上找办法.</p><h2>处理</h2><p>最后发现在sql连接后追加:&nbsp;<code>rewriteBatchedStatements=true</code>. 再次尝试, 已经缩减到<code>100</code>ms!</p><p><code>url: jdbc:mysql://localhost:3306/zgd?useUnicode=true&amp;characterEncoding=utf-8&amp;zeroDateTimeBehavior=convertToNull&amp;rewriteBatchedStatements=true</code></p><p><img alt="在这里插入图片描述" src="https://img-blog.csdnimg.cn/20201110181828643.png#pic_center"></p>
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373826.html</link>
<pubDate>Thu, 09 Feb 2023 01:16:50 +0900</pubDate>
</item>
<item>
<title>排雷日记 -- mybatisplus分页查询效率</title>
<description>
<![CDATA[ <h1 id="articleContentId">雷区场景：仍然是某2G+2C的项目用户管理模块，比较普通的增删改查功能。近日被吐嘈打开人员列表（用户数据量在70-80万之间）展示页面超慢！有时还会请求超时。。。<br><br>&nbsp;<br><br>初步排查：NetWork找到了响应慢为获取人员列表数据的接口，响应时间竟然超过了10s，看来实施的同学心态普遍都是极好了！！！然后安排了开发兄弟直接排查日志及查询脚本，确实也查到了一些关键问题，比如下面的查询脚本：<br><br><br><br>实际执行的脚本为：<br><br>select * from &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br>(SELECT bu.*, cp.name AS companyName, cp.id AS companyId&nbsp;<br>FROM xxx_user bu&nbsp;<br>LEFT JOIN xc_company_user cu ON cu.user_id = bu.user_id&nbsp;<br>LEFT JOIN xc_company cp ON cp.id = cu.company_id&nbsp;<br>LEFT JOIN xxx_dept bd ON bd.id = bu.dept_id&nbsp;<br>WHERE bu.is_deleted = 0) t&nbsp;<br>limit 0, 100;<br><br>而系统中集成了MybatisPlus框架，采用了自带的分页查询模式。所以在xml脚本中没有写入limit脚本，运行过程中mp会在脚本末尾自动附加上limit脚本。开发人员认为是该sql语句写的有问题，没有必要在外层又套了一个select t.* from的壳，认为内部的select做的是全表查询，所以修改了查询脚本为：<br><br>SELECT bu.*, cp.name AS companyName, cp.id AS companyId&nbsp;<br>FROM xxx_user bu&nbsp;<br>LEFT JOIN xc_company_user cu ON cu.user_id = bu.user_id&nbsp;<br>LEFT JOIN xc_company cp ON cp.id = cu.company_id&nbsp;<br>LEFT JOIN xxx_dept bd ON bd.id = bu.dept_id&nbsp;<br>WHERE bu.is_deleted = 0<br>limit 0, 100;<br><br>实则不然，两者的执行效率相差无几<br><br><br><br>当然，从脚本功能上来讲确实是没有必要在外层重新包装select t.* from的壳，还是做了这一步优化。<br><br>那么，问题来了，单独执行脚本基本都在100ms以内，而接口总体响应时间却在10s钟左右。那问题很有可能出在业务逻辑上了，接口程序查询完sql之后又做了其它的业务逻辑处理了？我得到的答复是：没有！！！<br><br>不放心的我找到程序位置又确认了一遍，确实没有：<br><br><br><br>这就有点意思了，按照惯例还是需要跟进mp的baseMapper底层实现去分析原因了。起初也曾怀疑过是mp查询结果反序列化耗时引起的，但并没有查到相关的配置参数，而且测试了单页查询10条和500条数据响应时间差不多就打消了这个疑虑，还是查源码吧！<br><br>&nbsp;<br><br>问题分析：一路漫长的调试源码（N多层嵌套），最终找到了mp的分页拦截器定义类PaginationInterceptor的intercept方法，<br><br><br><br>而sqlInfo.getSql()自动组装的查询总数据量的sql脚本竟然是：<br><br>SELECT COUNT(1)&nbsp;<br>FROM xxx_user bu&nbsp;<br>LEFT JOIN xc_company_user cu ON cu.user_id = bu.user_id&nbsp;<br>LEFT JOIN xc_company cp ON cp.id = cu.company_id&nbsp;<br>LEFT JOIN xxx_dept bd ON bd.id = bu.dept_id&nbsp;<br>WHERE bu.is_deleted = 0;<br><br>执行了一遍竟然耗时7s钟！！！总算找到问题所在了，那么为何做一个count计算会这么慢呢，原因在于下面的几个left join，对于这种count而言，做这些left 
join纯粹是浪费时间啊！无语，mp的分页查询只是机械化的将原始sql脚本的select做了替换：<br><br><br><br>去掉这些无用的left join表之后，查询单表count耗时500ms。<br><br>&nbsp;<br><br>解决方案： 目前只是确定了left join对count的性能损耗冗余量较大，针对该响应慢的接口做了如下优化，不使用mp自带的分页查询，单独写分页及count查询，而且优化查询方式，不再对每一页查询都做数据总量的查询，而是只在第一页和最后一页时重新查一遍总量，以节省系统开销。<br><br><br><br>优化后的接口响应时间第一页在600ms左右，第二页在100-200ms之间。<br><br>这只是目前临时的优化方案，对mp机械化的分页方式还是需要区分业务场景<br>————————————————<br>版权声明：本文为CSDN博主「淡泊明志-宁静致远」的原创文章，遵循CC 4.0 BY-SA版权协议，转载请附上原文出处链接及本声明。<br>原文链接：https://blog.csdn.net/yinianshen/article/details/115949701</h1>
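文中"只在第一页和最后一页重新查总量"的优化思路，可以抽象成下面这个小示意类（类名 PageCountCache、方法名均为假设，countQuery 代表那条昂贵的 COUNT 查询）：

```java
import java.util.function.LongSupplier;

// 示意：缓存分页总数，只在首页或末页时重新执行昂贵的 COUNT 查询，
// 中间页直接复用缓存值，节省系统开销
public class PageCountCache {
    private long cachedTotal = -1; // -1 表示尚未查询过

    public long totalFor(int page, int totalPages, LongSupplier countQuery) {
        if (cachedTotal < 0 || page == 1 || page == totalPages) {
            cachedTotal = countQuery.getAsLong(); // 真正执行 COUNT
        }
        return cachedTotal; // 中间页直接返回缓存
    }

    public static void main(String[] args) {
        PageCountCache cache = new PageCountCache();
        long first = cache.totalFor(1, 10, () -> 800_000L); // 首页：真查
        long mid = cache.totalFor(5, 10, () -> 999L);       // 中间页：用缓存
        System.out.println(first + " " + mid);
    }
}
```

配合手写的、去掉多余 left join 的单表 COUNT 语句使用，即可绕开 mp 分页拦截器机械替换 select 带来的冗余开销。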
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373786.html</link>
<pubDate>Thu, 09 Feb 2023 01:15:56 +0900</pubDate>
</item>
<item>
<title>批处理 rewriteBatchedStatements=true</title>
<description>
<![CDATA[ <p>最近在优化大批量数据插入的性能问题。<br>项目原来使用的大批量数据插入方法是Mybatis的foreach拼接SQL的方法。<br>我发现不管改成Mybatis Batch提交或者原生JDBC Batch的方法都不起作用，实际上在插入的时候仍然是一条条记录的插，速度远不如原来Mybatis的foreach拼接SQL的方法。这对于常理来说是非常不科学的。<br><br>下面先罗列一下三种插入方式：<br><br>public class NotifyRecordDaoTest extends BaseTest {<br><br>&nbsp; &nbsp; @Resource(name = "masterDataSource")<br>&nbsp; &nbsp; private DataSource dataSource;<br><br><br>&nbsp; &nbsp; @Test<br>&nbsp; &nbsp; public void insert() throws Exception {<br><br>&nbsp; &nbsp; &nbsp; &nbsp; Connection connection = dataSource.getConnection();<br>&nbsp; &nbsp; &nbsp; &nbsp; connection.setAutoCommit(false);<br>&nbsp; &nbsp; &nbsp; &nbsp; String sql = "insert into notify_record(" +<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; " &nbsp; &nbsp; &nbsp; &nbsp;partner_no," +<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; " &nbsp; &nbsp; &nbsp; &nbsp;trade_no, loan_no, notify_times," +<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; " &nbsp; &nbsp; &nbsp; &nbsp;limit_notify_times, notify_url, notify_type,notify_content," +<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; " &nbsp; &nbsp; &nbsp; &nbsp;notify_status)" +<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; " &nbsp; &nbsp; &nbsp; &nbsp;values(?,?,?,?,?,?,?,?,?) 
";<br><br>&nbsp; &nbsp; &nbsp; &nbsp; PreparedStatement statement = connection.prepareStatement(sql);<br><br>&nbsp; &nbsp; &nbsp; &nbsp; for (int i = 0; i &lt; 10000; i++) {<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(1, "1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(2, i + "");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setInt(3, 1);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setInt(4, 1);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(5, "1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(6, "1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(7, "1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(8, "1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.setString(9, "1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; statement.addBatch();<br>&nbsp; &nbsp; &nbsp; &nbsp; }<br>&nbsp; &nbsp; &nbsp; &nbsp; long start = System.currentTimeMillis();<br><br>&nbsp; &nbsp; &nbsp; &nbsp; statement.executeBatch();<br>&nbsp; &nbsp; &nbsp; &nbsp; connection.commit();<br>&nbsp; &nbsp; &nbsp; &nbsp; connection.close();<br>&nbsp; &nbsp; &nbsp; &nbsp; statement.close();<br>&nbsp; &nbsp; &nbsp; &nbsp; System.out.println(System.currentTimeMillis() - start);<br><br><br>&nbsp; &nbsp; }<br><br>&nbsp; &nbsp; @Test<br>&nbsp; &nbsp; public void insertB() {<br><br>&nbsp; &nbsp; &nbsp; &nbsp; List&lt;NotifyRecordEntity&gt; notifyRecordEntityList = Lists.newArrayList();<br>&nbsp; &nbsp; &nbsp; &nbsp; for (int i = 0; i &lt; 10000; i++) {<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NotifyRecordEntity record = new NotifyRecordEntity();<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setLastNotifyTime(new Date());<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setPartnerNo("1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setLimitNotifyTimes(1);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyUrl("1");<br>&nbsp; &nbsp; 
&nbsp; &nbsp; &nbsp; &nbsp; record.setLoanNo("1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyContent("1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setTradeNo("" + i);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyTimes(1);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyType(EnumNotifyType.DAIFU);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyStatus(EnumNotifyStatus.FAIL);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; notifyRecordEntityList.add(record);<br>&nbsp; &nbsp; &nbsp; &nbsp; }<br>&nbsp; &nbsp; &nbsp; &nbsp; long start = System.currentTimeMillis();<br>&nbsp; &nbsp; &nbsp; &nbsp; Map&lt;String, Object&gt; params = Maps.newHashMap();<br>&nbsp; &nbsp; &nbsp; &nbsp; params.put("notifyRecordEntityList", notifyRecordEntityList);<br>&nbsp; &nbsp; &nbsp; &nbsp; DaoFactory.notifyRecordDao.insertSelectiveList(params);<br>&nbsp; &nbsp; &nbsp; &nbsp; System.out.println(System.currentTimeMillis() - start);<br><br>&nbsp; &nbsp; }<br><br><br>&nbsp; &nbsp; @Resource<br>&nbsp; &nbsp; SqlSessionFactory sqlSessionFactory;<br><br>&nbsp; &nbsp; @Test<br>&nbsp; &nbsp; public void insertC() {<br><br>&nbsp; &nbsp; &nbsp; &nbsp; SqlSession sqlsession = sqlSessionFactory.openSession(ExecutorType.BATCH, false);<br>&nbsp; &nbsp; &nbsp; &nbsp; NotifyRecordDao notifyRecordDao = sqlsession.getMapper(NotifyRecordDao.class);<br>&nbsp; &nbsp; &nbsp; &nbsp; int num = 0;<br><br>&nbsp; &nbsp; &nbsp; &nbsp; for (int i = 0; i &lt; 10000; i++) {<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NotifyRecordEntity record = new NotifyRecordEntity();<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setLastNotifyTime(new Date());<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setPartnerNo("1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setLimitNotifyTimes(1);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyUrl("1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setLoanNo("1");<br>&nbsp; &nbsp; 
&nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyContent("1");<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setTradeNo("s" + i);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyTimes(1);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyType(EnumNotifyType.DAIFU);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; record.setNotifyStatus(EnumNotifyStatus.FAIL);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; notifyRecordDao.insert(record);<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; num++;<br>// &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;if(num&gt;=1000){<br>// &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;sqlsession.commit();<br>// &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;sqlsession.clearCache();<br>// &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;num=0;<br>// &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;}<br>&nbsp; &nbsp; &nbsp; &nbsp; }<br>&nbsp; &nbsp; &nbsp; &nbsp; long start = System.currentTimeMillis();<br>&nbsp; &nbsp; &nbsp; &nbsp; sqlsession.commit();<br>&nbsp; &nbsp; &nbsp; &nbsp; sqlsession.clearCache();<br>&nbsp; &nbsp; &nbsp; &nbsp; sqlsession.close();<br>&nbsp; &nbsp; &nbsp; &nbsp; System.out.println(System.currentTimeMillis() - start);<br><br><br>&nbsp; &nbsp; }<br>}<br>测试插入一万条数据，发现除了拼接SQL的方式需要用5秒多的时间外，Mybatis Batch和原生JDBC 
Batch都需要50多秒，怎么想都觉得不可能，写法没有问题一定是数据库或者数据库连接配置上有问题。<br><br>后来才发现要批量执行的话，JDBC连接URL字符串中需要新增一个参数：rewriteBatchedStatements=true<br><br>master.jdbc.url=jdbc:mysql://112.126.84.3:3306/outreach_platform?useUnicode=true&amp;characterEncoding=utf8&amp;allowMultiQueries=true&amp;rewriteBatchedStatements=true<br><br>关于rewriteBatchedStatements这个参数介绍：<br><br>MySQL的JDBC连接的url中要加rewriteBatchedStatements参数，并保证5.1.13以上版本的驱动，才能实现高性能的批量插入。<br>MySQL JDBC驱动在默认情况下会无视executeBatch()语句，把我们期望批量执行的一组sql语句拆散，一条一条地发给MySQL数据库，批量插入实际上是单条插入，直接造成较低的性能。<br>只有把rewriteBatchedStatements参数置为true, 驱动才会帮你批量执行SQL<br>另外这个选项对INSERT/UPDATE/DELETE都有效<br><br>添加rewriteBatchedStatements=true这个参数后的执行速度比较：<br>同个表插入一万条数据时间近似值：<br>JDBC BATCH 1.1秒左右 &gt; Mybatis BATCH 2.2秒左右 &gt; 拼接SQL 4.5秒左右<br>————————————————<br>&nbsp;</p>
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373677.html</link>
<pubDate>Thu, 09 Feb 2023 01:13:55 +0900</pubDate>
</item>
<item>
<title>Mybatis-Plus批量插入数据太慢，使用rewriteBatchedStatements属</title>
<description>
<![CDATA[ <p>rewriteBatchedStatements神秘属性<br>前言<br>一、rewriteBatchedStatements参数<br>二、批量添加员工信息<br>1.普通saveBatch批量插入<br>2.设置rewriteBatchedStatements=true批量插入<br>总结<br>前言<br>最近小编手上一堆项目，实在特别忙，每天一堆批量操作，更新、导入、新增、删除，公司使用的Mybatis-Plus操作SQL，用过Mybatis-Plus的小伙伴一定知道他有很多API提供给我们使用，真爽，再不用写那么多繁琐的SQL语句，saveBatch是Plus的批量插入函数，大家平时工作肯定都用过，下面我们就来一个案例进入今天的主题。<br><br>一、rewriteBatchedStatements参数<br>MySQL的JDBC连接的url中要加rewriteBatchedStatements参数，并保证5.1.13以上版本的驱动，才能实现高性能的批量插入。MySQL JDBC驱动在默认情况下会无视executeBatch()语句，把我们期望批量执行的一组sql语句拆散，一条一条地发给MySQL数据库，批量插入实际上是单条插入，直接造成较低的性能。只有把rewriteBatchedStatements参数置为true, 驱动才会帮你批量执行SQL，另外这个选项对INSERT/UPDATE/DELETE都有效<br><br>添加rewriteBatchedStatements=true这个参数后的执行速度比较：<br><br>二、批量添加员工信息<br>1.普通saveBatch批量插入<br>我们循环1万次，把每个实例员工对象装到员工集合（List）中,然后调用Mybatis-Plus的saveBatch方法，传入List集合，实现批量员工的插入，然后我们在方法开始结束的地方，计算当前函数执行时长。<br><br>@PostMapping("/addBath")<br>@ResponseBody<br>public CommonResult&lt;Employee&gt; addBath(){<br>&nbsp; &nbsp; long startTime = System.currentTimeMillis();<br>&nbsp; &nbsp; List&lt;Employee&gt; list = new ArrayList&lt;&gt;();<br>&nbsp; &nbsp; // 循环批量添加1万条员工数据<br>&nbsp; &nbsp; for (int i = 0; i &lt; 10000; i++) {<br>&nbsp; &nbsp; &nbsp; &nbsp; Employee employee = new Employee();<br>&nbsp; &nbsp; &nbsp; &nbsp; employee.setName("DT测试"+i);<br>&nbsp; &nbsp; &nbsp; &nbsp; employee.setAge(20);<br>&nbsp; &nbsp; &nbsp; &nbsp; employee.setSalary(9000D);<br>&nbsp; &nbsp; &nbsp; &nbsp; employee.setDepartmentId(i);<br>&nbsp; &nbsp; &nbsp; &nbsp; list.add(employee);<br>&nbsp; &nbsp; }<br>&nbsp; &nbsp; boolean batch = employeeService.saveBatch(list);<br>&nbsp; &nbsp; if(batch){<br>&nbsp; &nbsp; &nbsp; &nbsp; long endTime = System.currentTimeMillis();<br>&nbsp; &nbsp; &nbsp; &nbsp; System.out.println("函数执行时间：" + (endTime - startTime) + "ms");<br>&nbsp; &nbsp; &nbsp; &nbsp; return CommonResult.success();<br>&nbsp; &nbsp; }<br>&nbsp; &nbsp; return 
CommonResult.error();<br>}<br><br>为了测试得细致些，我多点了几下这个方法，下面是每次记录的时长：<br><br>批量添加1万条员工数据，测试结果如下：<br><br>第一次：（2秒多）<br><br>第二次：（接近2秒）<br><br>第三次：（接近2秒）<br><br>差不多添加1万条数据在2秒左右，这个时候我们加大到10万条，再测试：<br><br>批量添加10万条员工数据，测试结果如下：<br><br>第一次：（19.341 秒）<br><br>第二次：（18.298 秒）<br><br>顿时我傻了，10万条数据批量添加要20秒左右，这要是再加个10万条，那不得崩掉？于是我就各种找解决方案，最后锁定一个数据库连接的属性rewriteBatchedStatements，下面我们就添加上该属性试试速度与激情。<br><br>2.设置rewriteBatchedStatements=true批量插入<br>下面我们为数据库的连接加上rewriteBatchedStatements=true的属性，再测试批量插入的耗时。<br><br>rewriteBatchedStatements=true<br><br>批量添加1万条员工数据，测试结果如下：<br><br>质的飞跃啊！牛逼，可以看出批处理的速度还是非常给力的。<br><br>1万条数据：2s --&gt;&gt;&gt; 0.5s<br><br>批量添加10万条员工数据，测试结果如下：<br><br>效果惊呆了吧？？？直接起飞啊。<br><br>10万条数据：20s --&gt;&gt;&gt; 5s<br><br>总结<br>所以，如果你想验证rewriteBatchedStatements在你的系统里是否已经生效，记得要使用较大的batch。以上就是我的这次总结了，如果有更好的，或者更专业的，记得留下你的指教呀～<br>&nbsp;</p>
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373574.html</link>
<pubDate>Thu, 09 Feb 2023 01:12:10 +0900</pubDate>
</item>
<item>
<title>Investigating why MyBatis-Plus's batch-insert method saveBatch is so slow</title>
<description>
<![CDATA[ <h1 id="articleContentId">Problem scenario</h1><p>Using MyBatis-Plus's saveBatch method to batch-insert data.</p><h2>Problem description</h2><pre><code>/**
 * Batch-add devices
 * @param deviceList
 * @param applicationName
 * @return
 */
public boolean saveBatchDevice(List&lt;Device&gt; deviceList, String applicationName) {
    if (CollectionUtils.isEmpty(deviceList)) {
        return false;
    }
    boolean result = saveBatch(deviceList);
    if (result) {
        List&lt;Map&gt; mapList = deviceList.stream().map((Function&lt;Device, Map&gt;) device -&gt; {
            Map&lt;String, String&gt; item = new HashMap&lt;&gt;(5);
            item.put("imei", device.getImei());
            item.put("supplier", device.getSupplier());
            item.put("model", device.getModel());
            return item;
        }).collect(Collectors.toList());
        kafkaTemplate.send(String.format(ADD_DEVICE_TOPIC, applicationName), JSONUtil.toStr(mapList));
    }
    return result;
}</code></pre><p>This method batch-saves the incoming device list, but it turned out to be extremely slow: saving 1,000 records took about a minute. Timing each step with StopWatch showed that essentially all of the time was spent in saveBatch, the batch-save method this Service inherits from MyBatis-Plus's ServiceImpl.</p><h2>Root-cause analysis</h2><p>Look at the saveBatch source:</p><pre><code>/**
 * Execute a batch operation
 *
 * @param entityClass entity class
 * @param log         logger
 * @param list        data collection
 * @param batchSize   batch size
 * @param consumer    consumer
 * @param &lt;E&gt;         T
 * @return result of the operation
 * @since 3.4.0
 */
public static &lt;E&gt; boolean executeBatch(Class&lt;?&gt; entityClass, Log log, Collection&lt;E&gt; list, int batchSize, BiConsumer&lt;SqlSession, E&gt; consumer) {
    Assert.isFalse(batchSize &lt; 1, "batchSize must not be less than one");
    return !CollectionUtils.isEmpty(list) &amp;&amp; executeBatch(entityClass, log, sqlSession -&gt; {
        int size = list.size();
        int i = 1;
        for (E element : list) {
            consumer.accept(sqlSession, element);
            if ((i % batchSize == 0) || i == size) {
                sqlSession.flushStatements();
            }
            i++;
        }
    });
}</code></pre><p>The incoming entity list is processed in batches of 1,000: each entity is handed to the SqlSession via sqlSession.insert(sqlStatement, entity), and after each batch sqlSession.flushStatements() is called. That looks correct, yet it is still very slow. Consulting the documentation revealed that for true batch execution, the JDBC connection URL string needs one extra parameter: rewriteBatchedStatements=true.</p><p>The rewriteBatchedStatements parameter must be added to the MySQL JDBC connection URL, with driver version 5.1.13 or later, to achieve high-performance batch inserts.<br>By default, the MySQL JDBC driver does not genuinely batch an executeBatch() call: it splits the group of SQL statements intended as a batch and sends them to MySQL one at a time, so the batch insert is really a series of single-row inserts, which directly causes the poor performance.<br>Only with rewriteBatchedStatements set to true will the driver execute the SQL as a true batch.<br>This option also applies to INSERT, UPDATE, and DELETE statements.<br>————————————————<br>Copyright notice: this is an original article by CSDN blogger "chengpei147", licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.<br>Original link: https://blog.csdn.net/chengpei147/article/details/114969606</p>
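The flush cadence of the executeBatch() loop above can be sketched in isolation. The mock below replaces the SqlSession and consumer with comments and only counts how many times flushStatements() would fire; everything except the loop logic is my own scaffolding.

```java
public class FlushCadenceDemo {
    // Mirrors the loop in MyBatis-Plus executeBatch(): statements are buffered
    // on the batch-mode SqlSession and flushed every batchSize rows, plus once
    // more for the final partial batch.
    static int flushCount(int listSize, int batchSize) {
        int flushes = 0;
        int i = 1;
        for (int n = 0; n < listSize; n++) {
            // consumer.accept(sqlSession, element) would buffer one insert here
            if ((i % batchSize == 0) || i == listSize) {
                flushes++; // sqlSession.flushStatements()
            }
            i++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        // 2,500 rows at batchSize 1000 -> flushes at rows 1000, 2000, and 2500
        System.out.println(flushCount(2500, 1000));
    }
}
```

The point of the sketch is that the MyBatis-Plus side is already doing the right thing: the slowness comes from what the driver does with each flushed batch, which is exactly what rewriteBatchedStatements changes.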
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373497.html</link>
<pubDate>Thu, 09 Feb 2023 01:10:57 +0900</pubDate>
</item>
<item>
<title>Investigating why MyBatis-Plus's batch-insert method saveBatch is so slow</title>
<description>
<![CDATA[ <p><strong>For batch execution, the JDBC connection URL string needs one extra parameter: rewriteBatchedStatements=true</strong></p><p>https://blog.csdn.net/chengpei147/article/details/114969606</p><p>https://blog.csdn.net/qq_34283987/article/details/107694587</p><p>This blog exists only as a memo of the relevant knowledge; most of it comes from articles found online, and I thank each of their authors.</p>
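As a reminder of where the parameter goes in a Spring Boot project, here is a sketch of a datasource URL with it applied. The host, database name, and credentials are placeholders of mine, not values from the posts above.

```properties
# application.properties — host/db/credentials are placeholders
spring.datasource.url=jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true&useUnicode=true&characterEncoding=utf8
spring.datasource.username=root
spring.datasource.password=secret
```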
]]>
</description>
<link>https://ameblo.jp/iameyamasky/entry-12788373442.html</link>
<pubDate>Thu, 09 Feb 2023 01:09:42 +0900</pubDate>
</item>
</channel>
</rss>
