正在加载中,请稍后...&figure&&img src=&/50/v2-fb8d34603daf193e5c86b0a_b.png& data-rawwidth=&510& data-rawheight=&320& class=&origin_image zh-lightbox-thumb& width=&510& data-original=&/50/v2-fb8d34603daf193e5c86b0a_r.png&&&/figure&&p&你在工作、学习中是否曾因信息过载叫苦不迭?有一种方法能够替你读海量文章,并将不同的主题和对应的关键词抽取出来,让你谈笑间观其大略。本文使用Python对超过1000条文本做主题抽取,一步步带你体会非监督机器学习LDA方法的魅力。想不想试试呢?&/p&&p&&br&&/p&&figure&&img src=&/v2-fb8d34603daf193e5c86b0a_b.png& data-rawwidth=&510& data-rawheight=&320& class=&origin_image zh-lightbox-thumb& width=&510& data-original=&/v2-fb8d34603daf193e5c86b0a_r.png&&&/figure&&p&&br&&/p&&h2&淹没&/h2&&p&每个现代人,几乎都体会过信息过载的痛苦。文章读不过来,音乐听不过来,视频看不过来。可是现实的压力,使你又不能轻易放弃掉。&/p&&p&假如你是个研究生,教科书和论文就是你不得不读的内容。现在有了各种其他的阅读渠道,微信、微博、得到App、多看阅读、豆瓣阅读、Kindle,还有你在RSS上订阅的一大堆博客……情况就变得更严重了。&/p&&p&因为对数据科学很感兴趣,你订阅了大量的数据科学类微信公众号。虽然你很勤奋,但你知道自己依然遗漏了很多文章。&/p&&p&学习了 &a href=&/?target=http%3A///c/3e& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Python爬虫课&i class=&icon-external&&&/i&&/a& 以后,你决定尝试一下自己的屠龙之术。依仗着爬虫的威力,你打算采集到所有数据科学公众号文章。&/p&&p&你仔细分析了微信公众号文章的检索方式,制定了关键词列表。巧妙利用搜狗搜索引擎的特性,你编写了自己的爬虫,并且成功地于午夜放到了云端运行。&/p&&p&开心啊,激动啊……&/p&&p&第二天一早,天光刚亮,睡眠不足的你就兴冲冲地爬起来去看爬取结果。居然已经有了1000多条!你欣喜若狂,导出成为csv格式,存储到了本地机器,并且打开浏览。&/p&&p&&br&&/p&&figure&&img src=&/v2-3c404cd3d6fd98b846d8de30bc54df60_b.jpg& data-rawwidth=&1240& data-rawheight=&729& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-3c404cd3d6fd98b846d8de30bc54df60_r.jpg&&&/figure&&p&&br&&/p&&p&兴奋了10几分钟之后,你冷却了下来,给自己提出了2个重要的问题。&/p&&ul&&li&这些文章都值得读吗?&/li&&li&这些文章我读得过来吗?&/li&&/ul&&p&一篇数据科学类公众号,你平均需要5分钟阅读。这1000多篇……你拿出计算器认真算了一下。&/p&&p&&br&&/p&&figure&&img src=&/v2-10807fc52eaeb12a1da045_b.jpg& data-rawwidth=&688& data-rawheight=&864& class=&origin_image zh-lightbox-thumb& width=&688& data-original=&/v2-10807fc52eaeb12a1da045_r.jpg&&&/figure&&p&&br&&/p&&p&读完这一宿采集到的文章,你不眠不休的话,也需要85个小时。&/p&&p&在你阅读的这85个小时里面,许许多多的数据科学类公众号新文章还会源源不断涌现出来。&/p&&p&你感觉自己快被文本内容淹没了,根本透不过气……&/p&&p&学了这么长时间Python,你应该想到——我能否用自动化工具来分析它?&/p&&p&好消息,答案是可以的。&/p&&p&但是用什么样的工具呢?&/p&&p&翻了翻你自己的武器库,你发现了&a href=&/?target=http%3A///p/e4b24a734ccc& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&词云&i class=&icon-external&&&/i&&/a&、&a href=&/?target=http%3A///p/d50a14541d01& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&情感分析&i class=&icon-external&&&/i&&/a&和&a href=&/?target=http%3A///p/67a71e366516& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&决策树&i class=&icon-external&&&/i&&/a&。&/p&&p&然而,在帮你应对信息过载这件事儿上,上述武器好像都不大合适。&/p&&p&词云你打算做几个?全部文章只做一个的话,就会把所有文章的内容混杂起来,没有意义——因为你知道这些文章谈的就是数据科学啊!如果每一篇文章都分别做词云,1000多张图浏览起来,好像也没有什么益处。&/p&&p&你阅读数据科学类公众号文章是为了获得知识和技能,分析文字中蕴含的情感似乎于事无补。&/p&&p&决策树是可以用来做分类的,没错。可是它要求的输入信息是&b&结构化&/b&的&b&有&/b&标记数据,你手里握着的这一大堆文本,却刚好是&b&非结构化&/b&的&b&无&/b&标记数据。&/p&&p&全部武器都哑火了。&/p&&p&没关系。本文帮助你在数据科学武器库中放上一件新式兵器。它能够处理的,就是大批量的非结构无标记数据。在机器学习的分类里,它属于非监督学习(unsupervised machine learning)范畴。具体而言,我们需要用到的方法叫主题建模(topic model)或者主题抽取(topic extraction)。&/p&&h2&主题&/h2&&p&既然要建模,我们就需要弄明白建立什么样的模型。&/p&&p&根据维基百科的定义,主题模型是指:&/p&&blockquote&在机器学习和自然语言处理等领域是用来在一系列文档中发现抽象主题的一种统计模型。&/blockquote&&p&这个定义本身好像就有点儿抽象,咱们举个例子吧。&/p&&p&还是维基百科上,对一条可爱的小狗有这样一段叙述。&/p&&blockquote&阿博(Bo;日-) 是美国第44任总统巴拉克·奥巴马的宠物狗,也是奥巴马家族的成员之一。阿博是一只已阉割的雄性黑色长毛葡萄牙水犬。奥巴马一家本来没有养狗,因为他的大女儿玛丽亚对狗过敏。但为了延续白宫主人历年均有养狗的传统,第一家庭在入主白宫后,花了多个月去观察各种犬种,并特地选择了葡萄牙水犬这一种掉毛少的低敏狗。&/blockquote&&p&我们来看看这条可爱的小狗照片:&/p&&p&&br&&/p&&figure&&img src=&/v2-387bb5c4132_b.png& data-rawwidth=&500& data-rawheight=&750& class=&origin_image zh-lightbox-thumb& width=&500& 
data-original=&/v2-387bb5c4132_r.png&&&/figure&&p&&br&&/p&&p&问题来了,这篇文章的主题(topic)是什么?&/p&&p&你可能脱口而出,“狗啊!”&/p&&p&且慢,换个问法。假设一个用户读了这篇文章,很感兴趣。你想推荐更多他可能感兴趣的文章给他,以下2段文字,哪个选项更合适呢?&/p&&p&选项1:&/p&&blockquote&阿富汗猎狗(Afghan Hound)是一种猎犬,也是最古老的狗品种。阿富汗猎狗外表厚实,细腻,柔滑,它的尾巴在最后一环卷曲。阿富汗猎狗生存于伊朗,阿富汗东部的寒冷山上,阿富汗猎狗最初是用来狩猎野兔和瞪羚。阿富汗猎狗其他名称包含巴尔赫塔子库奇猎犬,猎犬,俾路支猎犬,喀布尔猎犬,或非洲猎犬。&/blockquote&&p&选项2:&/p&&blockquote&1989年夏天,奥巴马在西德利·奥斯汀律师事务所担任暑期工读生期间,结识当时已是律师的米歇尔·鲁滨逊。两人于1992年结婚,现有两个女儿——大女儿玛丽亚在1999年于芝加哥芝加哥大学医疗中心出生,而小女儿萨沙在2001年于芝加哥大学医疗中心出生。&/blockquote&&p&给你30秒,思考一下。&/p&&p&你的答案是什么?&/p&&p&我的答案是——不确定。&/p&&p&人类天生喜欢把复杂问题简单化。我们恨不得把所有东西划分成具体的、互不干扰的分类,就如同药铺的一个个抽屉一样。然后需要的时候,从对应的抽屉里面取东西就可以了。&/p&&p&&br&&/p&&figure&&img src=&/v2-c331e0dd361c01dc0e3c_b.png& data-rawwidth=&960& data-rawheight=&720& class=&origin_image zh-lightbox-thumb& width=&960& data-original=&/v2-c331e0dd361c01dc0e3c_r.png&&&/figure&&p&&br&&/p&&p&这就像是职业。从前我们说“三百六十行”。随便拿出某个人来,我们就把他归入其中某一行。&/p&&p&现在不行了,反例就是所谓的“&a href=&/question/& class=&internal&&斜杠青年&/a&”。&/p&&p&主题这个事情,也同样不那么泾渭分明。介绍小狗Bo的文章虽然不长,但是任何单一主题都无法完全涵盖它。&/p&&p&如果用户是因为对小狗的喜爱,阅读了这篇文章,那么显然你给他推荐选项1会更理想;但是如果用户关注的是奥巴马的家庭,那么比起选项2来,选项1就显得不是那么合适了。&/p&&p&我们必须放弃用一个词来描述主题的尝试,转而用一系列关键词来刻画某个主题(例如“奥巴马”+“宠物“+”狗“+”第一家庭“)。&/p&&p&在这种模式下,以下的选项3可能会脱颖而出:&/p&&blockquote&据英国《每日邮报》报道,美国一名男子近日试图绑架总统奥巴马夫妇的宠物狗博(Bo),不惜由二千多公里远的北达科他州驱车往华盛顿,但因为走漏风声,被特勤局人员逮捕。奥巴马夫妇目前养有博和阳光(Sunny)两只葡萄牙水犬。&/blockquote&&p&讲到这里,你大概弄明白了主题抽取的目标了。可是面对浩如烟海的文章,我们怎么能够把相似的文章聚合起来,并且提取描述聚合后主题的重要关键词呢?&/p&&p&主题抽取有若干方法。目前最为流行的叫做隐含狄利克雷分布(Latent Dirichlet allocation),简称LDA。&/p&&p&LDA相关原理部分,置于本文最后。下面我们先用Python来尝试实践一次主题抽取。如果你对原理感兴趣,不妨再做延伸阅读。&/p&&h2&准备&/h2&&p&准备工作的第一步,还是先安装Anaconda套装。详细的流程步骤请参考《 &a href=&/?target=http%3A///p/e4b24a734ccc& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&如何用Python做词云&i class=&icon-external&&&/i&&/a& 》一文。&/p&&p&从微信公众平台爬来的datascience.csv文件,请从 &a href=&/?target=https%3A//s3-us-west-/notion-static/5a4cb4b906/datascience.csv& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&这里&i class=&icon-external&&&/i&&/a& 下载。你可以用Excel打开,看看下载是否完整和正确。&/p&&p&&br&&/p&&figure&&img src=&/v2-3c404cd3d6fd98b846d8de30bc54df60_b.jpg& data-rawwidth=&1240& data-rawheight=&729& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-3c404cd3d6fd98b846d8de30bc54df60_r.jpg&&&/figure&&p&&br&&/p&&p&如果一切正常,请将该csv文件移动到咱们的工作目录demo下。&/p&&p&到你的系统“终端”(macOS, Linux)或者“命令提示符”(Windows)下,进入我们的工作目录demo,执行以下命令。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&pip install jieba
pip install pyldavis
&/code&&/pre&&/div&&p&运行环境配置完毕。&/p&&p&在终端或者命令提示符下键入:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&jupyter notebook
&/code&&/pre&&/div&&p&&br&&/p&&figure&&img src=&/v2-1cd3519b6ecaf506b2b7_b.jpg& data-rawwidth=&1240& data-rawheight=&757& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-1cd3519b6ecaf506b2b7_r.jpg&&&/figure&&p&&br&&/p&&p&Jupyter Notebook已经正确运行。下面我们就可以正式编写代码了。&/p&&h2&代码&/h2&&p&我们在Jupyter Notebook中新建一个Python 2笔记本,起名为topic-model。&/p&&p&&br&&/p&&figure&&img src=&/v2-c81b228c120de627c93695_b.jpg& data-rawwidth=&1240& data-rawheight=&758& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-c81b228c120de627c93695_r.jpg&&&/figure&&p&&br&&/p&&p&为了处理表格数据,我们依然使用数据框工具Pandas。先调用它。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&import pandas as pd
&/code&&/pre&&/div&&p&然后读入我们的数据文件datascience.csv,注意它的编码是中文GB18030,不是Pandas默认设置的编码,所以此处需要显式指定编码类型,以免出现乱码错误。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&df = pd.read_csv(&datascience.csv&, encoding='gb18030')
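# 如果这一步报出编码错误,多半是 csv 保存时的编码与 gb18030 不一致,
# 可以试着换成 encoding='utf-8' 或 encoding='gbk' 再读一次(示例提示,具体以你保存文件时的编码为准)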
&/code&&/pre&&/div&&p&我们来看看数据框的头几行,以确认读取是否正确。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&df.head()
&/code&&/pre&&/div&&p&显示结果如下:&/p&&p&&br&&/p&&figure&&img src=&/v2-5e87e9bf648b706f3e5eaff268a8a4a4_b.jpg& data-rawwidth=&1240& data-rawheight=&242& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-5e87e9bf648b706f3e5eaff268a8a4a4_r.jpg&&&/figure&&p&&br&&/p&&p&没问题,头几行内容所有列都正确读入,文字显示正常。我们看看数据框的长度,以确认数据是否读取完整。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&df.shape
&/code&&/pre&&/div&&p&执行的结果为:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&(1024, 3)
&/code&&/pre&&/div&&p&行列数都与我们爬取到的数量一致,通过。&/p&&p&下面我们需要做一件重要工作——分词。这是因为我们需要提取每篇文章的关键词。而中文本身并不使用空格在单词间划分。此处我们采用“结巴分词”工具。这一工具的具体介绍和其他用途请参见《&a href=&/?target=http%3A///p/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&如何用Python做中文分词?&i class=&icon-external&&&/i&&/a&》一文。&/p&&p&我们首先调用jieba分词包。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&import jieba
&/code&&/pre&&/div&&p&我们此次需要处理的,不是单一文本数据,而是1000多条文本数据,因此我们需要把这项工作并行化。这就需要首先编写一个函数,处理单一文本的分词。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&def chinese_word_cut(mytext):
return & &.join(jieba.cut(mytext))
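# 可以先拿一句话试试分词效果(示例,仅供参考):
# print(chinese_word_cut(u'大数据产业发展受到国家重视'))
# 会得到类似 '大 数据 产业 发展 受到 国家 重视' 这样以空格分隔的结果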
&/code&&/pre&&/div&&p&有了这个函数之后,我们就可以不断调用它来批量处理数据框里面的全部文本(正文)信息了。你当然可以自己写个循环来做这项工作。但这里我们使用更为高效的apply函数。如果你对这个函数有兴趣,可以点击&a href=&/?target=https%3A///watch%3Fv%3DP_q0tkYqvSk& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&这段教学视频&i class=&icon-external&&&/i&&/a&查看具体的介绍。&/p&&p&下面这一段代码执行起来,可能需要一小段时间。请耐心等候。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&df[&content_cutted&] = df.content.apply(chinese_word_cut)
&/code&&/pre&&/div&&p&执行过程中可能会出现如下提示。没关系,忽略就好。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&Building prefix dict from the default dictionary ...
Loading model from cache /var/folders/8s/k8yr4zy52q1dh107gjx280mw0000gn/T/jieba.cache
Loading model cost 0.406 seconds.
Prefix dict has been built succesfully.
&/code&&/pre&&/div&&p&执行完毕之后,我们需要查看一下,文本是否已经被正确分词。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&df.content_cutted.head()
&/code&&/pre&&/div&&p&结果如下:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&0
大 数据 产业 发展 受到 国家 重视 , 而 大 数据 已经 上升 为 国家 战略 , 未...
点击 上方 “ 硅谷 周边 ” 关注 我 , 收到 最新 的 文章 哦 ! 昨天 , Goo...
国务院 总理 李克强 当地 时间 20 日 上午 在 纽约 下榻 饭店 同 美国 经济 、 ...
2016 年 , 全峰 集团 持续 挖掘 大 数据 、 云 计算 、 “ 互联网 + ” 等...
贵州 理工学院 召开 大 数据分析 与 应用 专题 分享 会
借 “ 创响 中国 ” 贵...
Name: content_cutted, dtype: object
&/code&&/pre&&/div&&p&单词之间都已经被空格区分开了。下面我们需要做一项重要工作,叫做文本的向量化。&/p&&p&不要被这个名称吓跑。它的意思其实很简单。因为计算机不但不认识中文,甚至连英文也不认识,它只认得数字。我们需要做的,是把文章中的关键词转换为一个个特征(列),然后对每一篇文章数关键词出现个数。&/p&&p&假如这里有两句话:&/p&&p&I love the game.&br&I hate the game.&/p&&p&那么我们就可以抽取出以下特征:&/p&&ul&&li&I&/li&&li&love&/li&&li&hate&/li&&li&the&/li&&li&game&/li&&/ul&&p&然后上面两句话就转换为以下表格:&/p&&p&&br&&/p&&figure&&img src=&/v2-50a4e0f8c73ec5bc6c7d57_b.jpg& data-rawwidth=&1240& data-rawheight=&188& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-50a4e0f8c73ec5bc6c7d57_r.jpg&&&/figure&&p&&br&&/p&&p&第一句表示为[1, 1, 0, 1, 1],第二句是[1, 0, 1, 1, 1]。这就叫向量化了。机器就能看懂它们了。&/p&&p&原理弄清楚了,让我们引入相关软件包吧。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
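# 如果想用前面 'I love the game.' / 'I hate the game.' 的小例子亲手感受一下向量化,
# 可以运行下面这几行(示例,仅供参考;这里把 token_pattern 改成保留单字母单词):
# demo_vectorizer = CountVectorizer(token_pattern=r'(?u)\b\w+\b')
# demo_tf = demo_vectorizer.fit_transform(['I love the game.', 'I hate the game.'])
# print(demo_vectorizer.get_feature_names())
# print(demo_tf.toarray())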
&/code&&/pre&&/div&&p&处理的文本都是微信公众号文章,里面可能会有大量的词汇。我们不希望处理所有词汇。因为一来处理时间太长,二来那些很不常用的词汇对我们的主题抽取意义不大。所以这里做了个限定,只从文本中提取1000个最重要的特征关键词,然后停止。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&n_features = 1000
&/code&&/pre&&/div&&p&下面我们开始关键词提取和向量转换过程:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&tf_vectorizer = CountVectorizer(strip_accents = 'unicode',
max_features=n_features,
stop_words='english',
max_df = 0.5,
min_df = 10)
tf = tf_vectorizer.fit_transform(df.content_cutted)
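# 此时 tf 是一个稀疏的词袋矩阵,可以顺手确认一下它的规模(示例):
# print(tf.shape)                                # 形如 (文章数, 1000)
# print(tf_vectorizer.get_feature_names()[:10])  # 看看前 10 个特征关键词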
&/code&&/pre&&/div&&p&到这里,似乎什么都没有发生。因为我们没有要求程序做任何输出。下面我们就要放出LDA这个大招了。&/p&&p&先引入软件包:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&from sklearn.decomposition import LatentDirichletAllocation
&/code&&/pre&&/div&&p&然后我们需要人为设定主题的数量。这个要求让很多人大跌眼镜——我怎么知道这一堆文章里面多少主题?!&/p&&p&别着急。应用LDA方法,指定(或者叫瞎猜)主题个数是必须的。如果你只需要把文章粗略划分成几个大类,就可以把数字设定小一些;相反,如果你希望能够识别出非常细分的主题,就增大主题个数。&/p&&p&对划分的结果,如果你觉得不够满意,可以通过继续迭代,调整主题数量来优化。&/p&&p&这里我们先设定为5个分类试试。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&n_topics = 5
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=50,
learning_method='online',
learning_offset=50.,
random_state=0)
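# 参数含义简注(按 scikit-learn 文档理解,供参考):
# n_topics 是人为指定的主题个数;max_iter 是最大迭代轮数;
# learning_method='online' 使用在线变分贝叶斯,适合文档较多的语料;
# random_state 固定随机种子,保证每次运行得到同样的主题划分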
&/code&&/pre&&/div&&p&把我们的1000多篇向量化后的文章扔给LDA,让它欢快地找主题吧。&/p&&p&这一部分工作量较大,程序会执行一段时间,Jupyter Notebook在执行中可能暂时没有响应。等待一会儿就好,不要着急。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&lda.fit(tf)
&/code&&/pre&&/div&&p&程序终于跑完了的时候,你会看到如下的提示信息:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&LatentDirichletAllocation(batch_size=128, doc_topic_prior=None,
evaluate_every=-1, learning_decay=0.7,
learning_method='online', learning_offset=50.0,
max_doc_update_iter=100, max_iter=50, mean_change_tol=0.001,
n_jobs=1, n_topics=5, perp_tol=0.1, random_state=0,
topic_word_prior=None, total_samples=, verbose=0)
&/code&&/pre&&/div&&p&可是,这还是什么输出都没有啊。它究竟找了什么样的主题?&/p&&p&主题没有一个确定的名称,而是用一系列关键词刻画的。我们定义以下的函数,把每个主题里面的前若干个关键词显示出来:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print(&Topic #%d:& % topic_idx)
        print(& &.join([feature_names[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))
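# 说明:topic.argsort() 把关键词下标按权重从小到大排序,
# 切片 [:-n_top_words - 1:-1] 取出权重最高的 n_top_words 个词,并按权重从高到低排列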
&/code&&/pre&&/div&&p&定义好函数之后,我们暂定每个主题输出前20个关键词。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&n_top_words = 20
&/code&&/pre&&/div&&p&以下命令会帮助我们依次输出每个主题的关键词表:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
&/code&&/pre&&/div&&p&执行效果如下:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&Topic #0:
学习 模型 使用 算法 方法 机器 可视化 神经网络 特征 处理 计算 系统 不同 数据库 训练 分类 基于 工具 一种 深度
这个 就是 可能 如果 他们 没有 自己 很多 什么 不是 但是 这样 因为 一些 时候 现在 用户 所以 非常 已经
企业 平台 服务 管理 互联网 公司 行业 数据分析 业务 用户 产品 金融 创新 客户 实现 系统 能力 产业 工作 价值
中国 2016 电子 增长 10 市场 城市 2015 关注 人口 检索 30 或者 其中 阅读 应当 美国 全国 同比 20
人工智能 学习 领域 智能 机器人 机器 人类 公司 深度 研究 未来 识别 已经 医疗 系统 计算机 目前 语音 百度 方面
&/code&&/pre&&/div&&p&在这5个主题里,可以看出主题0主要关注的是数据科学中的算法和技术,而主题4显然更注重数据科学的应用场景。&/p&&p&剩下的几个主题可以如何归纳?作为思考题,留给你花时间想一想吧。&/p&&p&到这里,LDA已经成功帮我们完成了主题抽取。但是我知道你不是很满意,因为结果不够直观。&/p&&p&那咱们就让它直观一些好了。&/p&&p&执行以下命令,会有有趣的事情发生。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
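# 如果想把交互图保存成独立网页离线查看,也可以这样做(示例):
# vis_data = pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
# pyLDAvis.save_html(vis_data, 'lda_vis.html')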
&/code&&/pre&&/div&&p&对,你会看到如下的一张图,而且还是可交互的动态图哦。&/p&&p&&br&&/p&&figure&&img src=&/v2-e174fc59c3d5f35b1bf1_b.jpg& data-rawwidth=&1240& data-rawheight=&775& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-e174fc59c3d5f35b1bf1_r.jpg&&&/figure&&p&&br&&/p&&p&需要说明的是,由于pyLDAvis这个包兼容性有些问题。因此在某些操作系统和软件环境下,你执行了刚刚的语句后,没有报错,却也没有图形显示出来。&/p&&p&没关系。这时候请你写下以下语句并执行:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&data = pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
pyLDAvis.show(data)
&/code&&/pre&&/div&&p&Jupyter会给你提示一些警告。不用管它。因为此时你的浏览器会弹出一个新的标签页,结果图形会在这个标签页里正确显示出来。&/p&&p&如果你看完了图后,需要继续程序,就回到原先的标签页,点击Kernel菜单下的第一项Interrupt停止绘图,然后往下运行新的语句。&/p&&p&图的左侧,用圆圈代表不同的主题,圆圈的大小代表了每个主题分别包含文章的数量。&/p&&p&图的右侧,列出了最重要(频率最高)的30个关键词列表。注意当你没有把鼠标悬停在任何主题之上的时候,这30个关键词代表全部文本中提取到的30个最重要关键词。&/p&&p&如果你把鼠标悬停在1号上面:&/p&&p&&br&&/p&&figure&&img src=&/v2-a66cf96c9cf_b.jpg& data-rawwidth=&1240& data-rawheight=&775& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-a66cf96c9cf_r.jpg&&&/figure&&p&&br&&/p&&p&右侧的关键词列表会立即发生变化,红色展示了每个关键词在当前主题下的频率。&/p&&p&以上是认为设定主题数为5的情况。可如果我们把主题数量设定为10呢?&/p&&p&你不需要重新运行所有代码,只需要执行下面这几行就可以了。&/p&&p&这段程序还是需要运行一段时间,请耐心等待。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&n_topics = 10
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=50,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(tf)
print_top_words(lda, tf_feature_names, n_top_words)
pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
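# 主题数并没有标准答案;如果想粗略比较不同取值,可以用困惑度(perplexity)做参考,数值越低通常越好(示例,训练会比较耗时):
# for k in (5, 10, 15, 20):
#     m = LatentDirichletAllocation(n_topics=k, max_iter=50, learning_method='online',
#                                   learning_offset=50., random_state=0).fit(tf)
#     print(k, m.perplexity(tf))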
&/code&&/pre&&/div&&p&程序输出给我们10个主题下最重要的20个关键词。&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&Topic #0:
这个 就是 如果 可能 用户 一些 什么 很多 没有 这样 时候 但是 因为 不是 所以 不同 如何 使用 或者 非常
中国 孩子 增长 市场 2016 学生 10 2015 城市 自己 人口 大众 关注 其中 教育 同比 没有 美国 投资 这个
data 变量 距离 http 样本 com www 检验 方法 分布 计算 聚类 如下 分类 之间 两个 一种 差异 表示 序列
电子 采集 应当 或者 案件 保护 规定 信用卡 收集 是否 提取 设备 法律 申请 法院 系统 记录 相关 要求 无法
系统 检索 交通 平台 专利 智能 监控 采集 海量 管理 搜索 智慧 出行 视频 车辆 计算 实现 基于 数据库 存储
可视化 使用 工具 数据库 存储 hadoop 处理 图表 数据仓库 支持 查询 开发 设计 sql 开源 用于 创建 用户 基于 软件
学习 算法 模型 机器 深度 神经网络 方法 训练 特征 分类 网络 使用 基于 介绍 研究 预测 回归 函数 参数 图片
企业 管理 服务 互联网 金融 客户 行业 平台 实现 建立 社会 政府 研究 资源 安全 时代 利用 传统 价值 医疗
人工智能 领域 机器人 智能 公司 人类 机器 学习 未来 已经 研究 他们 识别 可能 计算机 目前 语音 工作 现在 能够
用户 公司 企业 互联网 平台 中国 数据分析 行业 产业 产品 创新 项目 2016 服务 工作 科技 相关 业务 移动 市场
&/code&&/pre&&/div&&p&附带的是可视化的输出结果:&/p&&p&&br&&/p&&figure&&img src=&/v2-bf61cac9b937fa3ef74a4f48_b.jpg& data-rawwidth=&1240& data-rawheight=&775& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-bf61cac9b937fa3ef74a4f48_r.jpg&&&/figure&&p&&br&&/p&&p&如果不能直接输出图形,还是按照前面的做法,执行:&/p&&div class=&highlight&&&pre&&code class=&language-text&&&span&&/span&data = pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
pyLDAvis.show(data)
&/code&&/pre&&/div&&p&你马上会发现当主题设定为10的时候,一些有趣的现象发生了——大部分的文章抱团出现在右上方,而2个小部落(8和10)似乎离群索居。我们查看一下这里的8号主题,看看它的关键词构成。&/p&&p&&br&&/p&&figure&&img src=&/v2-0eeef8421941_b.jpg& data-rawwidth=&1240& data-rawheight=&775& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-0eeef8421941_r.jpg&&&/figure&&p&&br&&/p&&p&通过高频关键词的描述,我们可以猜测到这一主题主要探讨的是政策和法律法规问题,难怪它和那些技术、算法与应用的主题显得如此格格不入。&/p&&h2&说明&/h2&&p&前文帮助你一步步利用LDA做了主题抽取。成就感爆棚吧?然而这里有两点小问题值得说明。&/p&&p&首先,信息检索的业内专家一看到刚才的关键词列表,就会哈哈大笑——太粗糙了吧!居然没有做中文停用词(stop words)去除!没错,为了演示的流畅,我们这里忽略了许多细节。很多内容使用的是预置默认参数,而且完全忽略了中文停用词设置环节,因此“这个”、“如果”、“可能”、“就是”这样的停用词才会大摇大摆地出现在结果中。不过没有关系,完成比完美重要得多。知道了问题所在,后面改进起来很容易。有机会我会写文章介绍如何加入中文停用词的去除环节。&/p&&p&另外,不论是5个还是10个主题,可能都不是最优的数量选择。你可以根据程序反馈的结果不断尝试。实际上,可以调节的参数远不止这一个。如果你想把全部参数都搞懂,可以继续阅读下面的“原理”部分,按图索骥寻找相关的说明和指引。&/p&&h2&原理&/h2&&p&前文我们没有介绍原理,而是把LDA当成了一个黑箱。不是我不想介绍原理,而是过于复杂。&/p&&p&只给你展示其中的一个公式,你就能管窥其复杂程度了。&/p&&p&&br&&/p&&figure&&img src=&/v2-4c7c15a77f_b.png& data-rawwidth=&1240& data-rawheight=&149& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-4c7c15a77f_r.png&&&/figure&&p&&br&&/p&&p&透露给你一个秘密:在计算机科学和数据科学的学术讲座中,讲者在介绍到LDA时,都往往会把原理这部分直接跳过去。&/p&&p&好在你不需要把原理完全搞清楚,再去用LDA抽取主题。&/p&&p&这就像是学开车,你只要懂得如何加速、刹车、换挡、打方向,就能让车在路上行驶了。即便你通过所有考试并取得了驾驶证,你真的了解发动机或电机(如果你开的是纯电车)的构造和工作原理吗?&/p&&p&但是如果你就是希望了解LDA的原理,那么我给你推荐2个学起来不那么痛苦的资源吧。&/p&&p&首先是教程幻灯。slideshare是个寻找教程的好去处。 &a href=&/?target=https%3A//www.slideshare.net/clauwa/topic-models-lda-and-correlated-topic-models%3Fnext_slideshow%3D1& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&这份教程&i class=&icon-external&&&/i&&/a& 浏览量超过20000,内容深入浅出,讲得非常清晰。&/p&&p&&br&&/p&&figure&&img src=&/v2-7d5a32b936efefcc6a00b80aa904433e_b.jpg& data-rawwidth=&1240& data-rawheight=&1179& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-7d5a32b936efefcc6a00b80aa904433e_r.jpg&&&/figure&&p&&br&&/p&&p&但如果你跟我一样,是个视觉学习者的话,我更推荐你看 &a href=&/?target=https%3A///watch%3Ftime_continue%3DDBuMu-bdoVrU& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&这段&i class=&icon-external&&&/i&&/a& Youtube视频。&/p&&p&&br&&/p&&figure&&img src=&/v2-96f16fdd53ded36d3f5c_b.jpg& data-rawwidth=&1240& data-rawheight=&775& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-96f16fdd53ded36d3f5c_r.jpg&&&/figure&&p&&br&&/p&&p&讲者是Christine Doig,来自Continuum Analytics。咱们一直用的Python套装Anaconda就是该公司的产品。&/p&&p&Christine使用的LDA原理解释模型,不是这个LDA经典论文中的模型图(大部分人觉得这张图不易懂):&/p&&p&&br&&/p&&figure&&img src=&/v2-eaafefa3d6e0e8f1aa2eda_b.png& data-rawwidth=&440& data-rawheight=&217& class=&origin_image zh-lightbox-thumb& width=&440& data-original=&/v2-eaafefa3d6e0e8f1aa2eda_r.png&&&/figure&&p&&br&&/p&&p&她深入阅读了各种文献后,总结了自己的模型图出来:&/p&&p&&br&&/p&&figure&&img src=&/v2-ba08cedd79e5eefba85bd_b.png& data-rawwidth=&1240& data-rawheight=&652& class=&origin_image zh-lightbox-thumb& width=&1240& data-original=&/v2-ba08cedd79e5eefba85bd_r.png&&&/figure&&p&&br&&/p&&p&用这个模型来解释LDA,你会立即有豁然开朗的感觉。&/p&&p&祝探索旅程愉快!&/p&&h2&讨论&/h2&&p&除了本文提到的LDA算法,你还知道哪几种用于主题抽取的机器学习算法?你觉得主题建模(topic model)在信息检索等领域还有哪些可以应用的场景?欢迎留言分享给大家,我们一起交流讨论。&/p&&p&如果你对我的文章感兴趣,欢迎点赞,并且微信关注和置顶我的公众号“玉树芝兰”(nkwangshuyi)。&/p&&p&如果本文可能对你身边的亲友有帮助,也欢迎你把本文通过微博或朋友圈分享给他们。让他们一起参与到我们的讨论中来。&/p&&p&&/p&
&p&&u&&b&可以画画啊!可以画画啊!可以画画啊!&/b&&/u& 对,有趣的事情需要讲三遍。
事情是这样的,通过python的深度学习算法包去训练计算机模仿世界名画的风格,然后应用到另一幅画中,不多说直接上图!&/p&&img src=&/98cab4f35d9e90b47dee_b.jpg& data-rawwidth=&468& data-rawheight=&600& class=&origin_image zh-lightbox-thumb& width=&468& data-original=&/98cab4f35d9e90b47dee_r.jpg&&&br&&p&这个是世界名画”&i&毕加索的自画像&/i&“(我也不懂什么是世界名画,但是我会google呀哈哈),以这张图片为模板,让计算机去学习这张图片的风格(至于怎么学习请参照这篇国外大牛的论文&a href=&///?target=http%3A//arxiv.org/abs/& class=& external& target=&_blank& rel=&nofollow noreferrer&&&span class=&invisible&&http://&/span&&span class=&visible&&arxiv.org/abs/&/span&&span class=&invisible&&6&/span&&span class=&ellipsis&&&/span&&i class=&icon-external&&&/i&&/a&)应用到自己的这张图片上。&/p&&img src=&/7ca4eb6ca4bc1a9993ed35_b.jpg& data-rawwidth=&249& data-rawheight=&365& class=&content_image& width=&249&&&p&结果就变成下面这个样子了&/p&&img src=&/b869febc05752efd02bc74_b.png& data-rawwidth=&472& data-rawheight=&660& class=&origin_image zh-lightbox-thumb& width=&472& data-original=&/b869febc05752efd02bc74_r.png&&&br&&p&咦,吓死宝宝了,不过好玩的东西当然要身先士卒啦!
接着,由于距离开学越来越近,为了给广大新生营造一个良好的校园,噗!其实是为了美化校园在新生心目中的形象(学长真的不是有意要欺骗你们的),特意制作了下面的《梵高笔下的东华理工大学》。是不是没有听说过这个大学?的确,她就是一所普通的二本学校,不过这都不是重点。
左边的图片是梵高的《星空》作为模板,中间的图片是待转化的图片,右边的图片是结果&/p&&img src=&/16b2c1522cea0ae43c4d2c1cc6871b29_b.png& data-rawwidth=&852& data-rawheight=&172& class=&origin_image zh-lightbox-thumb& width=&852& data-original=&/16b2c1522cea0ae43c4d2c1cc6871b29_r.png&&&p&这是我们学校的内“湖”(池塘)&/p&&img src=&/14fb75b34_b.png& data-rawwidth=&856& data-rawheight=&173& class=&origin_image zh-lightbox-thumb& width=&856& data-original=&/14fb75b34_r.png&&&p&校园里的樱花广场(个人觉得这是我校最浪漫的地方了)&/p&&img src=&/dd2f5df802_b.png& data-rawwidth=&848& data-rawheight=&206& class=&origin_image zh-lightbox-thumb& width=&848& data-original=&/dd2f5df802_r.png&&&p&不多说,学校图书馆&/p&&img src=&/fdb0d_b.png& data-rawwidth=&851& data-rawheight=&194& class=&origin_image zh-lightbox-thumb& width=&851& data-original=&/fdb0d_r.png&&&p&“池塘”边的柳树&/p&&img src=&/8d8dcbca85b62fd237d4d6fc_b.png& data-rawwidth=&852& data-rawheight=&204& class=&origin_image zh-lightbox-thumb& width=&852& data-original=&/8d8dcbca85b62fd237d4d6fc_r.png&&&p&学校东大门&/p&&img src=&/add0ee7c8_b.png& data-rawwidth=&862& data-rawheight=&164& class=&origin_image zh-lightbox-thumb& width=&862& data-original=&/add0ee7c8_r.png&&&p&学校测绘楼&/p&&img src=&/cb6dbb3e_b.png& data-rawwidth=&852& data-rawheight=&211& class=&origin_image zh-lightbox-thumb& width=&852& data-original=&/cb6dbb3e_r.png&&&p&学校地学楼&/p&&p&为了便于观看,附上生成后的大图:&/p&&img src=&/d7ee30ae90b3d81049abee478e8e75b6_b.png& data-rawwidth=&699& data-rawheight=&408& class=&origin_image zh-lightbox-thumb& width=&699& data-original=&/d7ee30ae90b3d81049abee478e8e75b6_r.png&&&br&&img src=&/c04dbd2a0d3db26d0916206_b.png& data-rawwidth=&700& data-rawheight=&465& class=&origin_image zh-lightbox-thumb& width=&700& data-original=&/c04dbd2a0d3db26d0916206_r.png&&&br&&img src=&/f41d8d145b8a8fe30abba37ede85c778_b.png& data-rawwidth=&695& data-rawheight=&370& class=&origin_image zh-lightbox-thumb& width=&695& data-original=&/f41d8d145b8a8fe30abba37ede85c778_r.png&&&br&&img src=&/797a428a10c9a6278874_b.png& data-rawwidth=&700& data-rawheight=&447& class=&origin_image zh-lightbox-thumb& width=&700& data-original=&/797a428a10c9a6278874_r.png&&&br&&img src=&/d4e2accd9af15bc_b.png& data-rawwidth=&699& data-rawheight=&469& class=&origin_image zh-lightbox-thumb& width=&699& data-original=&/d4e2accd9af15bc_r.png&&&br&&img src=&/7ea97eab41a2c2eb32d356b1afd9ca2f_b.png& data-rawwidth=&698& data-rawheight=&409& class=&origin_image zh-lightbox-thumb& width=&698& data-original=&/7ea97eab41a2c2eb32d356b1afd9ca2f_r.png&&&br&&img src=&/5f1fb9ec9e0d6e66b76a2cc_b.png& data-rawwidth=&699& data-rawheight=&469& class=&origin_image zh-lightbox-thumb& width=&699& data-original=&/5f1fb9ec9e0d6e66b76a2cc_r.png&&&br&&p&别看才区区七张图片,可是这让计算机运行了好长的时间,期间电脑死机两次!&/p&&p&好了广告打完了,下面是福利时间&/p&&h2&&b&在本地用keras搭建风格转移平台&/b&&/h2&&h2&&b&1.相关依赖库的安装&/b&&/h2&&div class=&highlight&&&pre&&code class=&language-text&&# 命令行安装keras、h5py、tensorflow
pip3 install keras
pip3 install h5py
pip3 install tensorflow
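# 装好之后可以先确认一下能否正常导入(示例):
# python3 -c 'import keras, h5py, tensorflow; print(keras.__version__)'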
&/code&&/pre&&/div&&p&如果tensorflowan命令行安装失败,可以在这里下载whl包&a href=&///?target=http%3A//www.lfd.uci.edu/%7Egohlke/pythonlibs/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Python Extension Packages for Windows&i class=&icon-external&&&/i&&/a&&a href=&///?target=http%3A//www.lfd.uci.edu/%7Egohlke/pythonlibs/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&(进入网址后ctrl+F输入tensorflow可以快速搜索)&i class=&icon-external&&&/i&&/a&&/p&&h2&&b&2.配置运行环境&/b&&/h2&&p&&b&下载VGG16模型 &/b&&a href=&///?target=https%3A///s/1i5wYN1z& class=& external& target=&_blank& rel=&nofollow noreferrer&&&span class=&invisible&&https://&/span&&span class=&visible&&/s/1i5wYN1&/span&&span class=&invisible&&z&/span&&span class=&ellipsis&&&/span&&i class=&icon-external&&&/i&&/a& 放入如下目录当中&/p&&img src=&/v2-b06fa971e3b6ebcbe3ce39c_b.png& data-rawwidth=&654& data-rawheight=&129& class=&origin_image zh-lightbox-thumb& width=&654& data-original=&/v2-b06fa971e3b6ebcbe3ce39c_r.png&&&h2&&b&3.代码编写&/b&&/h2&&div class=&highlight&&&pre&&code class=&language-python3&&&span class=&kn&&from&/span& &span class=&nn&&__future__&/span& &span class=&k&&import&/span& &span class=&n&&print_function&/span&
&span class=&kn&&from&/span& &span class=&nn&&keras.preprocessing.image&/span& &span class=&k&&import&/span& &span class=&n&&load_img&/span&&span class=&p&&,&/span& &span class=&n&&img_to_array&/span&
&span class=&kn&&from&/span& &span class=&nn&&scipy.misc&/span& &span class=&k&&import&/span& &span class=&n&&imsave&/span&
&span class=&kn&&import&/span& &span class=&nn&&numpy&/span& &span class=&k&&as&/span& &span class=&nn&&np&/span&
&span class=&kn&&from&/span& &span class=&nn&&scipy.optimize&/span& &span class=&k&&import&/span& &span class=&n&&fmin_l_bfgs_b&/span&
&span class=&kn&&import&/span& &span class=&nn&&time&/span&
&span class=&kn&&import&/span& &span class=&nn&&argparse&/span&
&span class=&kn&&from&/span& &span class=&nn&&keras.applications&/span& &span class=&k&&import&/span& &span class=&n&&vgg16&/span&
&span class=&kn&&from&/span& &span class=&nn&&keras&/span& &span class=&k&&import&/span& &span class=&n&&backend&/span& &span class=&k&&as&/span& &span class=&n&&K&/span&
&span class=&n&&parser&/span& &span class=&o&&=&/span& &span class=&n&&argparse&/span&&span class=&o&&.&/span&&span class=&n&&ArgumentParser&/span&&span class=&p&&(&/span&&span class=&n&&description&/span&&span class=&o&&=&/span&&span class=&s&&'Neural style transfer with Keras.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'base_image_path'&/span&&span class=&p&&,&/span& &span class=&n&&metavar&/span&&span class=&o&&=&/span&&span class=&s&&'base'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&str&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Path to the image to transform.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'style_reference_image_path'&/span&&span class=&p&&,&/span& &span class=&n&&metavar&/span&&span class=&o&&=&/span&&span class=&s&&'ref'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&str&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Path to the style reference image.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'result_prefix'&/span&&span class=&p&&,&/span& &span class=&n&&metavar&/span&&span class=&o&&=&/span&&span class=&s&&'res_prefix'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&str&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Prefix for the saved results.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'--iter'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&int&/span&&span class=&p&&,&/span& &span class=&n&&default&/span&&span class=&o&&=&/span&&span class=&mi&&10&/span&&span class=&p&&,&/span& &span class=&n&&required&/span&&span class=&o&&=&/span&&span class=&k&&False&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Number of iterations to run.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'--content_weight'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&float&/span&&span class=&p&&,&/span& &span class=&n&&default&/span&&span class=&o&&=&/span&&span class=&mf&&0.025&/span&&span class=&p&&,&/span& &span class=&n&&required&/span&&span class=&o&&=&/span&&span class=&k&&False&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Content weight.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'--style_weight'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&float&/span&&span class=&p&&,&/span& &span class=&n&&default&/span&&span class=&o&&=&/span&&span class=&mf&&1.0&/span&&span class=&p&&,&/span& &span class=&n&&required&/span&&span class=&o&&=&/span&&span class=&k&&False&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Style weight.'&/span&&span class=&p&&)&/span&
&span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&add_argument&/span&&span class=&p&&(&/span&&span class=&s&&'--tv_weight'&/span&&span class=&p&&,&/span& &span class=&nb&&type&/span&&span class=&o&&=&/span&&span class=&nb&&float&/span&&span class=&p&&,&/span& &span class=&n&&default&/span&&span class=&o&&=&/span&&span class=&mf&&1.0&/span&&span class=&p&&,&/span& &span class=&n&&required&/span&&span class=&o&&=&/span&&span class=&k&&False&/span&&span class=&p&&,&/span&
&span class=&n&&help&/span&&span class=&o&&=&/span&&span class=&s&&'Total Variation weight.'&/span&&span class=&p&&)&/span&
&span class=&n&&args&/span& &span class=&o&&=&/span& &span class=&n&&parser&/span&&span class=&o&&.&/span&&span class=&n&&parse_args&/span&&span class=&p&&()&/span&
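# 运行方式示意(脚本文件名 neural_style.py 是假设的,按你实际保存的名字来):
# python neural_style.py 待转化图片.jpg 风格模板图.jpg 输出前缀 --iter 10
# 三个位置参数依次对应 base_image_path、style_reference_image_path 和 result_prefix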
&span class=&n&&base_image_path&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&base_image_path&/span&
&span class=&n&&style_reference_image_path&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&style_reference_image_path&/span&
&span class=&n&&result_prefix&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&result_prefix&/span&
&span class=&n&&iterations&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&iter&/span&
&span class=&c&&# these are the weights of the different loss components&/span&
&span class=&n&&total_variation_weight&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&tv_weight&/span&
&span class=&n&&style_weight&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&style_weight&/span&
&span class=&n&&content_weight&/span& &span class=&o&&=&/span& &span class=&n&&args&/span&&span class=&o&&.&/span&&span class=&n&&content_weight&/span&
&span class=&c&&# dimensions of the generated picture.&/span&
&span class=&n&&width&/span&&span class=&p&&,&/span& &span class=&n&&height&/span& &span class=&o&&=&/span& &span class=&n&&load_img&/span&&span class=&p&&(&/span&&span class=&n&&base_image_path&/span&&span class=&p&&)&/span&&span class=&o&&.&/span&&span class=&n&&size&/span&
&span class=&n&&img_nrows&/span& &span class=&o&&=&/span& &span class=&mi&&400&/span&
&span class=&n&&img_ncols&/span& &span class=&o&&=&/span& &span class=&nb&&int&/span&&span class=&p&&(&/span&&span class=&n&&width&/span& &span class=&o&&*&/span& &span class=&n&&img_nrows&/span& &span class=&o&&/&/span& &span class=&n&&height&/span&&span class=&p&&)&/span&
&span class=&c&&# util function to open, resize and format pictures into appropriate tensors&/span&
&span class=&k&&def&/span& &span class=&nf&&preprocess_image&/span&&span class=&p&&(&/span&&span class=&n&&image_path&/span&&span class=&p&&):&/span&
&span class=&n&&img&/span& &span class=&o&&=&/span& &span class=&n&&load_img&/span&&span class=&p&&(&/span&&span class=&n&&image_path&/span&&span class=&p&&,&/span& &span class=&n&&target_size&/span&&span class=&o&&=&/span&&span class=&p&&(&/span&&span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&))&/span&
&span class=&n&&img&/span& &span class=&o&&=&/span& &span class=&n&&img_to_array&/span&&span class=&p&&(&/span&&span class=&n&&img&/span&&span class=&p&&)&/span&
&span class=&n&&img&/span& &span class=&o&&=&/span& &span class=&n&&np&/span&&span class=&o&&.&/span&&span class=&n&&expand_dims&/span&&span class=&p&&(&/span&&span class=&n&&img&/span&&span class=&p&&,&/span& &span class=&n&&axis&/span&&span class=&o&&=&/span&&span class=&mi&&0&/span&&span class=&p&&)&/span&
&span class=&n&&img&/span& &span class=&o&&=&/span& &span class=&n&&vgg16&/span&&span class=&o&&.&/span&&span class=&n&&preprocess_input&/span&&span class=&p&&(&/span&&span class=&n&&img&/span&&span class=&p&&)&/span&
&span class=&k&&return&/span& &span class=&n&&img&/span&
&span class=&c&&# util function to convert a tensor into a valid image&/span&
&span class=&k&&def&/span& &span class=&nf&&deprocess_image&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&):&/span&
&span class=&k&&if&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&image_data_format&/span&&span class=&p&&()&/span& &span class=&o&&==&/span& &span class=&s&&'channels_first'&/span&&span class=&p&&:&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&x&/span&&span class=&o&&.&/span&&span class=&n&&reshape&/span&&span class=&p&&((&/span&&span class=&mi&&3&/span&&span class=&p&&,&/span& &span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&))&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&x&/span&&span class=&o&&.&/span&&span class=&n&&transpose&/span&&span class=&p&&((&/span&&span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&mi&&2&/span&&span class=&p&&,&/span& &span class=&mi&&0&/span&&span class=&p&&))&/span&
&span class=&k&&else&/span&&span class=&p&&:&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&x&/span&&span class=&o&&.&/span&&span class=&n&&reshape&/span&&span class=&p&&((&/span&&span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&,&/span& &span class=&mi&&3&/span&&span class=&p&&))&/span&
&span class=&c&&# Remove zero-center by mean pixel&/span&
&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&mi&&0&/span&&span class=&p&&]&/span& &span class=&o&&+=&/span& &span class=&mf&&103.939&/span&
&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&mi&&1&/span&&span class=&p&&]&/span& &span class=&o&&+=&/span& &span class=&mf&&116.779&/span&
&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&mi&&2&/span&&span class=&p&&]&/span& &span class=&o&&+=&/span& &span class=&mf&&123.68&/span&
&span class=&c&&# 'BGR'-&'RGB'&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&p&&::&/span&&span class=&o&&-&/span&&span class=&mi&&1&/span&&span class=&p&&]&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&np&/span&&span class=&o&&.&/span&&span class=&n&&clip&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&,&/span& &span class=&mi&&0&/span&&span class=&p&&,&/span& &span class=&mi&&255&/span&&span class=&p&&)&/span&&span class=&o&&.&/span&&span class=&n&&astype&/span&&span class=&p&&(&/span&&span class=&s&&'uint8'&/span&&span class=&p&&)&/span&
&span class=&k&&return&/span& &span class=&n&&x&/span&
&span class=&c&&# get tensor representations of our images&/span&
&span class=&n&&base_image&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&variable&/span&&span class=&p&&(&/span&&span class=&n&&preprocess_image&/span&&span class=&p&&(&/span&&span class=&n&&base_image_path&/span&&span class=&p&&))&/span&
&span class=&n&&style_reference_image&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&variable&/span&&span class=&p&&(&/span&&span class=&n&&preprocess_image&/span&&span class=&p&&(&/span&&span class=&n&&style_reference_image_path&/span&&span class=&p&&))&/span&
&span class=&c&&# this will contain our generated image&/span&
&span class=&k&&if&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&image_data_format&/span&&span class=&p&&()&/span& &span class=&o&&==&/span& &span class=&s&&'channels_first'&/span&&span class=&p&&:&/span&
&span class=&n&&combination_image&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&placeholder&/span&&span class=&p&&((&/span&&span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&mi&&3&/span&&span class=&p&&,&/span& &span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&))&/span&
&span class=&k&&else&/span&&span class=&p&&:&/span&
&span class=&n&&combination_image&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&placeholder&/span&&span class=&p&&((&/span&&span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&,&/span& &span class=&mi&&3&/span&&span class=&p&&))&/span&
&span class=&c&&# combine the 3 images into a single Keras tensor&/span&
&span class=&n&&input_tensor&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&concatenate&/span&&span class=&p&&([&/span&&span class=&n&&base_image&/span&&span class=&p&&,&/span&
&span class=&n&&style_reference_image&/span&&span class=&p&&,&/span&
&span class=&n&&combination_image&/span&&span class=&p&&],&/span& &span class=&n&&axis&/span&&span class=&o&&=&/span&&span class=&mi&&0&/span&&span class=&p&&)&/span&
&span class=&c&&# build the VGG16 network with our 3 images as input&/span&
&span class=&c&&# the model will be loaded with pre-trained ImageNet weights&/span&
&span class=&n&&model&/span& &span class=&o&&=&/span& &span class=&n&&vgg16&/span&&span class=&o&&.&/span&&span class=&n&&VGG16&/span&&span class=&p&&(&/span&&span class=&n&&input_tensor&/span&&span class=&o&&=&/span&&span class=&n&&input_tensor&/span&&span class=&p&&,&/span&
&span class=&n&&weights&/span&&span class=&o&&=&/span&&span class=&s&&'imagenet'&/span&&span class=&p&&,&/span& &span class=&n&&include_top&/span&&span class=&o&&=&/span&&span class=&k&&False&/span&&span class=&p&&)&/span&
&span class=&nb&&print&/span&&span class=&p&&(&/span&&span class=&s&&'Model loaded.'&/span&&span class=&p&&)&/span&
&span class=&c&&# get the symbolic outputs of each &key& layer (we gave them unique names).&/span&
&span class=&n&&outputs_dict&/span& &span class=&o&&=&/span& &span class=&nb&&dict&/span&&span class=&p&&([(&/span&&span class=&n&&layer&/span&&span class=&o&&.&/span&&span class=&n&&name&/span&&span class=&p&&,&/span& &span class=&n&&layer&/span&&span class=&o&&.&/span&&span class=&n&&output&/span&&span class=&p&&)&/span& &span class=&k&&for&/span& &span class=&n&&layer&/span& &span class=&ow&&in&/span& &span class=&n&&model&/span&&span class=&o&&.&/span&&span class=&n&&layers&/span&&span class=&p&&])&/span&
&span class=&c&&# compute the neural style loss&/span&
&span class=&c&&# first we need to define 4 util functions&/span&
&span class=&c&&# the gram matrix of an image tensor (feature-wise outer product)&/span&
&span class=&k&&def&/span& &span class=&nf&&gram_matrix&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&):&/span&
&span class=&k&&assert&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&ndim&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&)&/span& &span class=&o&&==&/span& &span class=&mi&&3&/span&
&span class=&k&&if&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&image_data_format&/span&&span class=&p&&()&/span& &span class=&o&&==&/span& &span class=&s&&'channels_first'&/span&&span class=&p&&:&/span&
&span class=&n&&features&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&batch_flatten&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&)&/span&
&span class=&k&&else&/span&&span class=&p&&:&/span&
&span class=&n&&features&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&batch_flatten&/span&&span class=&p&&(&/span&&span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&permute_dimensions&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&,&/span& &span class=&p&&(&/span&&span class=&mi&&2&/span&&span class=&p&&,&/span& &span class=&mi&&0&/span&&span class=&p&&,&/span& &span class=&mi&&1&/span&&span class=&p&&)))&/span&
&span class=&n&&gram&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&dot&/span&&span class=&p&&(&/span&&span class=&n&&features&/span&&span class=&p&&,&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&transpose&/span&&span class=&p&&(&/span&&span class=&n&&features&/span&&span class=&p&&))&/span&
&span class=&k&&return&/span& &span class=&n&&gram&/span&
&span class=&c&&# the &style loss& is designed to maintain&/span&
&span class=&c&&# the style of the reference image in the generated image.&/span&
&span class=&c&&# It is based on the gram matrices (which capture style) of&/span&
&span class=&c&&# feature maps from the style reference image&/span&
&span class=&c&&# and from the generated image&/span&
&span class=&k&&def&/span& &span class=&nf&&style_loss&/span&&span class=&p&&(&/span&&span class=&n&&style&/span&&span class=&p&&,&/span& &span class=&n&&combination&/span&&span class=&p&&):&/span&
&span class=&k&&assert&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&ndim&/span&&span class=&p&&(&/span&&span class=&n&&style&/span&&span class=&p&&)&/span& &span class=&o&&==&/span& &span class=&mi&&3&/span&
&span class=&k&&assert&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&ndim&/span&&span class=&p&&(&/span&&span class=&n&&combination&/span&&span class=&p&&)&/span& &span class=&o&&==&/span& &span class=&mi&&3&/span&
&span class=&n&&S&/span& &span class=&o&&=&/span& &span class=&n&&gram_matrix&/span&&span class=&p&&(&/span&&span class=&n&&style&/span&&span class=&p&&)&/span&
&span class=&n&&C&/span& &span class=&o&&=&/span& &span class=&n&&gram_matrix&/span&&span class=&p&&(&/span&&span class=&n&&combination&/span&&span class=&p&&)&/span&
&span class=&n&&channels&/span& &span class=&o&&=&/span& &span class=&mi&&3&/span&
&span class=&n&&size&/span& &span class=&o&&=&/span& &span class=&n&&img_nrows&/span& &span class=&o&&*&/span& &span class=&n&&img_ncols&/span&
&span class=&k&&return&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&sum&/span&&span class=&p&&(&/span&&span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&square&/span&&span class=&p&&(&/span&&span class=&n&&S&/span& &span class=&o&&-&/span& &span class=&n&&C&/span&&span class=&p&&))&/span& &span class=&o&&/&/span& &span class=&p&&(&/span&&span class=&mf&&4.&/span& &span class=&o&&*&/span& &span class=&p&&(&/span&&span class=&n&&channels&/span& &span class=&o&&**&/span& &span class=&mi&&2&/span&&span class=&p&&)&/span& &span class=&o&&*&/span& &span class=&p&&(&/span&&span class=&n&&size&/span& &span class=&o&&**&/span& &span class=&mi&&2&/span&&span class=&p&&))&/span&
&span class=&c&&# an auxiliary loss function&/span&
&span class=&c&&# designed to maintain the &content& of the&/span&
&span class=&c&&# base image in the generated image&/span&
&span class=&k&&def&/span& &span class=&nf&&content_loss&/span&&span class=&p&&(&/span&&span class=&n&&base&/span&&span class=&p&&,&/span& &span class=&n&&combination&/span&&span class=&p&&):&/span&
&span class=&k&&return&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&sum&/span&&span class=&p&&(&/span&&span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&square&/span&&span class=&p&&(&/span&&span class=&n&&combination&/span& &span class=&o&&-&/span& &span class=&n&&base&/span&&span class=&p&&))&/span&
&span class=&c&&# the 3rd loss function, total variation loss,&/span&
&span class=&c&&# designed to keep the generated image locally coherent&/span&
&span class=&k&&def&/span& &span class=&nf&&total_variation_loss&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&):&/span&
&span class=&k&&assert&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&ndim&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&)&/span& &span class=&o&&==&/span& &span class=&mi&&4&/span&
&span class=&k&&if&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&image_data_format&/span&&span class=&p&&()&/span& &span class=&o&&==&/span& &span class=&s&&'channels_first'&/span&&span class=&p&&:&/span&
&span class=&n&&a&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&square&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&p&&:&/span&&span class=&n&&img_nrows&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:&/span&&span class=&n&&img_ncols&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&]&/span& &span class=&o&&-&/span& &span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&mi&&1&/span&&span class=&p&&:,&/span& &span class=&p&&:&/span&&span class=&n&&img_ncols&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&])&/span&
&span class=&n&&b&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&square&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&p&&:&/span&&span class=&n&&img_nrows&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:&/span&&span class=&n&&img_ncols&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&]&/span& &span class=&o&&-&/span& &span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:,&/span& &span class=&p&&:&/span&&span class=&n&&img_nrows&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&mi&&1&/span&&span class=&p&&:])&/span&
&span class=&k&&else&/span&&span class=&p&&:&/span&
&span class=&n&&a&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&square&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:&/span&&span class=&n&&img_nrows&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:&/span&&span class=&n&&img_ncols&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:]&/span& &span class=&o&&-&/span& &span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&mi&&1&/span&&span class=&p&&:,&/span& &span class=&p&&:&/span&&span class=&n&&img_ncols&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:])&/span&
&span class=&n&&b&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&square&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:&/span&&span class=&n&&img_nrows&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:&/span&&span class=&n&&img_ncols&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:]&/span& &span class=&o&&-&/span& &span class=&n&&x&/span&&span class=&p&&[:,&/span& &span class=&p&&:&/span&&span class=&n&&img_nrows&/span& &span class=&o&&-&/span& &span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&mi&&1&/span&&span class=&p&&:,&/span& &span class=&p&&:])&/span&
&span class=&k&&return&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&sum&/span&&span class=&p&&(&/span&&span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&pow&/span&&span class=&p&&(&/span&&span class=&n&&a&/span& &span class=&o&&+&/span& &span class=&n&&b&/span&&span class=&p&&,&/span& &span class=&mf&&1.25&/span&&span class=&p&&))&/span&
&span class=&c&&# combine these loss functions into a single scalar&/span&
&span class=&n&&loss&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&variable&/span&&span class=&p&&(&/span&&span class=&mf&&0.&/span&&span class=&p&&)&/span&
&span class=&n&&layer_features&/span& &span class=&o&&=&/span& &span class=&n&&outputs_dict&/span&&span class=&p&&[&/span&&span class=&s&&'block4_conv2'&/span&&span class=&p&&]&/span&
&span class=&n&&base_image_features&/span& &span class=&o&&=&/span& &span class=&n&&layer_features&/span&&span class=&p&&[&/span&&span class=&mi&&0&/span&&span class=&p&&,&/span& &span class=&p&&:,&/span& &span class=&p&&:,&/span& &span class=&p&&:]&/span&
&span class=&n&&combination_features&/span& &span class=&o&&=&/span& &span class=&n&&layer_features&/span&&span class=&p&&[&/span&&span class=&mi&&2&/span&&span class=&p&&,&/span& &span class=&p&&:,&/span& &span class=&p&&:,&/span& &span class=&p&&:]&/span&
&span class=&n&&loss&/span& &span class=&o&&+=&/span& &span class=&n&&content_weight&/span& &span class=&o&&*&/span& &span class=&n&&content_loss&/span&&span class=&p&&(&/span&&span class=&n&&base_image_features&/span&&span class=&p&&,&/span&
&span class=&n&&combination_features&/span&&span class=&p&&)&/span&
&span class=&n&&feature_layers&/span& &span class=&o&&=&/span& &span class=&p&&[&/span&&span class=&s&&'block1_conv1'&/span&&span class=&p&&,&/span& &span class=&s&&'block2_conv1'&/span&&span class=&p&&,&/span&
&span class=&s&&'block3_conv1'&/span&&span class=&p&&,&/span& &span class=&s&&'block4_conv1'&/span&&span class=&p&&,&/span&
&span class=&s&&'block5_conv1'&/span&&span class=&p&&]&/span&
&span class=&k&&for&/span& &span class=&n&&layer_name&/span& &span class=&ow&&in&/span& &span class=&n&&feature_layers&/span&&span class=&p&&:&/span&
&span class=&n&&layer_features&/span& &span class=&o&&=&/span& &span class=&n&&outputs_dict&/span&&span class=&p&&[&/span&&span class=&n&&layer_name&/span&&span class=&p&&]&/span&
&span class=&n&&style_reference_features&/span& &span class=&o&&=&/span& &span class=&n&&layer_features&/span&&span class=&p&&[&/span&&span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&p&&:,&/span& &span class=&p&&:,&/span& &span class=&p&&:]&/span&
&span class=&n&&combination_features&/span& &span class=&o&&=&/span& &span class=&n&&layer_features&/span&&span class=&p&&[&/span&&span class=&mi&&2&/span&&span class=&p&&,&/span& &span class=&p&&:,&/span& &span class=&p&&:,&/span& &span class=&p&&:]&/span&
&span class=&n&&sl&/span& &span class=&o&&=&/span& &span class=&n&&style_loss&/span&&span class=&p&&(&/span&&span class=&n&&style_reference_features&/span&&span class=&p&&,&/span& &span class=&n&&combination_features&/span&&span class=&p&&)&/span&
&span class=&n&&loss&/span& &span class=&o&&+=&/span& &span class=&p&&(&/span&&span class=&n&&style_weight&/span& &span class=&o&&/&/span& &span class=&nb&&len&/span&&span class=&p&&(&/span&&span class=&n&&feature_layers&/span&&span class=&p&&))&/span& &span class=&o&&*&/span& &span class=&n&&sl&/span&
&span class=&n&&loss&/span& &span class=&o&&+=&/span& &span class=&n&&total_variation_weight&/span& &span class=&o&&*&/span& &span class=&n&&total_variation_loss&/span&&span class=&p&&(&/span&&span class=&n&&combination_image&/span&&span class=&p&&)&/span&
&span class=&c&&# get the gradients of the generated image wrt the loss&/span&
&span class=&n&&grads&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&gradients&/span&&span class=&p&&(&/span&&span class=&n&&loss&/span&&span class=&p&&,&/span& &span class=&n&&combination_image&/span&&span class=&p&&)&/span&
&span class=&n&&outputs&/span& &span class=&o&&=&/span& &span class=&p&&[&/span&&span class=&n&&loss&/span&&span class=&p&&]&/span&
&span class=&k&&if&/span& &span class=&nb&&isinstance&/span&&span class=&p&&(&/span&&span class=&n&&grads&/span&&span class=&p&&,&/span& &span class=&p&&(&/span&&span class=&nb&&list&/span&&span class=&p&&,&/span& &span class=&nb&&tuple&/span&&span class=&p&&)):&/span&
&span class=&n&&outputs&/span& &span class=&o&&+=&/span& &span class=&n&&grads&/span&
&span class=&k&&else&/span&&span class=&p&&:&/span&
&span class=&n&&outputs&/span&&span class=&o&&.&/span&&span class=&n&&append&/span&&span class=&p&&(&/span&&span class=&n&&grads&/span&&span class=&p&&)&/span&
&span class=&n&&f_outputs&/span& &span class=&o&&=&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&function&/span&&span class=&p&&([&/span&&span class=&n&&combination_image&/span&&span class=&p&&],&/span& &span class=&n&&outputs&/span&&span class=&p&&)&/span&
&span class=&k&&def&/span& &span class=&nf&&eval_loss_and_grads&/span&&span class=&p&&(&/span&&span class=&n&&x&/span&&span class=&p&&):&/span&
&span class=&k&&if&/span& &span class=&n&&K&/span&&span class=&o&&.&/span&&span class=&n&&image_data_format&/span&&span class=&p&&()&/span& &span class=&o&&==&/span& &span class=&s&&'channels_first'&/span&&span class=&p&&:&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&x&/span&&span class=&o&&.&/span&&span class=&n&&reshape&/span&&span class=&p&&((&/span&&span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&mi&&3&/span&&span class=&p&&,&/span& &span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&))&/span&
&span class=&k&&else&/span&&span class=&p&&:&/span&
&span class=&n&&x&/span& &span class=&o&&=&/span& &span class=&n&&x&/span&&span class=&o&&.&/span&&span class=&n&&reshape&/span&&span class=&p&&((&/span&&span class=&mi&&1&/span&&span class=&p&&,&/span& &span class=&n&&img_nrows&/span&&span class=&p&&,&/span& &span class=&n&&img_ncols&/span&&span class=&p&&,&/span& &span class=&mi&&3&/span&&span class=&p&&))&/span&
&span class=&n&&outs&/span& &span class=&o&&=&/span& &span class=&n&&f_outputs&/span&&span class=&p&&([&/span&&span class=&n&&x&/span&&span class=&p&&])&/span&
&span class=&n&&loss_value&/span& &span class=&o&&=&/span& &span class=&n&&outs&/span&&span class=&p&&[&/span&&span class=&mi&&0&/span&&span class=&p&&]&/span&
&span class=&k&&if&/span& &span class=&nb&&len&/span&&span class=&p&&(&/span&&span class=&n&&outs&/span&&span class=&p&&[&/span&&span class=&mi&&1&/span&&span class=&p&&:])&/span& &span class=&o&&==&/span& &span class=&mi&&1&/span&&span class=&p&&:&/span&
&span class=&n&&grad_values&/span& &span class=&o&&=&/span& &span class=&n&&outs&/span&&span class=&p&&[&/span&&span class=&mi&&1&/span&}
