
Downloading Files by URL and Saving Them to the Corresponding Directory in Python


Introduction

In practice you often run into datasets, image datasets in particular, whose samples are stored as URLs in txt files. To make later analysis easier, the files need to be downloaded and saved into folders by category. This article takes the image classification dataset provided by Alexander Kim on GitHub as an example: it downloads the image samples and saves them by category.

Environment: Python 3.6.5, Anaconda, VS Code

1. Download the Dataset Files

Create a project folder, download the raw_data folder from the GitHub project mentioned above, and save it into the project directory.


2. Get the Sample File Locations

Write get_doc_path.py, which, given the root directory, collects all dataset files in that directory and its subdirectories.

import os

def get_file(root_path, all_files=None):
    '''Recursive function: traverse the directory and its subdirectories and record the path of every file.'''
    if all_files is None:  # avoid the mutable-default-argument pitfall
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files

if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))
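For reference, the same traversal can also be written with the standard library's os.walk, which handles the recursion itself. A minimal sketch (get_file_walk is a name introduced here, not part of the original project):

import os

def get_file_walk(root_path):
    # Build {filename: path} for every file under root_path, including subdirectories.
    all_files = {}
    for dirpath, dirnames, filenames in os.walk(root_path):
        for name in filenames:
            all_files[name] = os.path.join(dirpath, name)
    return all_files

As with the recursive version, a later file overwrites an earlier one if two files share a name; that is acceptable here because each category folder holds one uniquely named txt file.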

3. Download the Files

3.1 Read the URL Lists

for filename, path in paths.items():
    print('reading file: {}'.format(filename))
    with open(path, 'r') as f:
        lines = f.readlines()
        url_list = []
        for line in lines:
            url_list.append(line.strip('\n'))  # drop the trailing newline
        print(url_list)
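As an aside, the readlines-and-strip pattern can be collapsed into a single call to splitlines(), which discards the trailing newlines itself; a minimal equivalent sketch for the body of the with block:

with open(path, 'r') as f:
    url_list = f.read().splitlines()  # one URL per line, newline characters removed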

3.2 Create the Folders

foldername = './picture_get_by_url/pic_download/{}'.format(filename.split('.')[0])
if not os.path.exists(foldername):
    print('Selected folder does not exist, trying to create it.')
    os.makedirs(foldername)
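On Python 3.2 and later, the existence check can be folded into the call itself via exist_ok=True; a minimal sketch (the concrete path is illustrative):

import os

foldername = './picture_get_by_url/pic_download/drawings'  # example category folder
os.makedirs(foldername, exist_ok=True)  # creates intermediate dirs; no error if it already exists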

3.3 Download the Images

import os
import urllib.request

def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print('Selected folder does not exist, trying to create it.')
        os.makedirs(folder_path)
    for url in lists:
        print('Try downloading file: {}'.format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print('File already exists, skipping.')
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print('Error occurred when downloading file, error message:')
                print(e)
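One caveat: urllib.request.urlretrieve offers no timeout parameter, so a single unresponsive server can stall the whole loop. If the third-party requests library is available, the download step could be replaced with something like the following sketch (download_one is a helper introduced here, not part of the original script):

import requests

def download_one(url, filepath, timeout=10):
    # Fetch url with a timeout and write the body to filepath.
    # Returns True on success, False on any network or HTTP error.
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()  # raise on 4xx/5xx responses
    except requests.RequestException as e:
        print('Error occurred when downloading file, error message:')
        print(e)
        return False
    with open(filepath, 'wb') as f:
        f.write(resp.content)
    return True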

4. Complete Source Code

4.1 get_doc_path.py

import os

def get_file(root_path, all_files=None):
    '''Recursive function: traverse the directory and its subdirectories and record the path of every file.'''
    if all_files is None:  # avoid the mutable-default-argument pitfall
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files

if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))

4.2 get_pic.py

import get_doc_path
import os
import urllib.request

def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print('Selected folder does not exist, trying to create it.')
        os.makedirs(folder_path)
    for url in lists:
        print('Try downloading file: {}'.format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print('File already exists, skipping.')
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print('Error occurred when downloading file, error message:')
                print(e)

if __name__ == '__main__':
    root_path = './picture_get_by_url/raw_data'
    paths = get_doc_path.get_file(root_path)
    print(paths)
    for filename, path in paths.items():
        print('reading file: {}'.format(filename))
        with open(path, 'r') as f:
            lines = f.readlines()
            url_list = []
            for line in lines:
                url_list.append(line.strip('\n'))
            foldername = './picture_get_by_url/pic_download/{}'.format(filename.split('.')[0])
            get_pic_by_url(foldername, url_list)
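A usage note: the relative paths ('./picture_get_by_url/raw_data', './picture_get_by_url/pic_download/...') suggest the scripts are meant to be run from the parent directory of picture_get_by_url; run get_pic.py from anywhere else and os.listdir will raise FileNotFoundError.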

4.3 Run Results

Run get_pic.py. If the program stops unexpectedly or is run again, it automatically skips the files already present in the folder and continues downloading the rest.

{'urls_drawings.txt': './picture_get_by_url/raw_data/drawings/urls_drawings.txt', 'urls_hentai.txt': './picture_get_by_url/raw_data/hentai/urls_hentai.txt', 'urls_neutral.txt': './picture_get_by_url/raw_data/neutral/urls_neutral.txt', 'urls_porn.txt': './picture_get_by_url/raw_data/porn/urls_porn.txt', 'urls_sexy.txt': './picture_get_by_url/raw_data/sexy/urls_sexy.txt'}
reading file: urls_drawings.txt
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://ak1.polyvoreimg.com/cgi/img-thing/size/l/tid/xxxxxx.jpg
Error occurred when downloading file, error message:
HTTP Error 502: No data received from server or forwarder
Try downloading file: http://akicocotte.weblike.jp/gaugau/xxxxxx.jpg
Try downloading file: http://animewriter.files.wordpress.com/2009/01/nagisa-xxxxxx-xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg

Postscript: due to the nature of the sample dataset's contents, the addresses above are shown with xxxxxx in place of the real paths, and the example project is no longer available, but the method can still serve as a reference.

Update (2020-09-23): dataset address: https://github.com/ZQ-Qi/nsfw_data_scrapper. If you simply want to learn from and practice the code in this article, you can download this dataset and try it out.

