mirror of https://github.com/arsenetar/dupeguru.git synced 2026-01-25 08:01:39 +00:00

Compare commits


41 Commits

Author SHA1 Message Date
Virgil Dupras
943a6570d8 Added Utopic Unicorn to the list of supported Ubuntu dists 2014-10-26 12:18:49 -04:00
Virgil Dupras
854a253d9f me v6.8.1 2014-10-26 12:00:54 -04:00
Virgil Dupras
4e477104a6 Use --deep flag when code signing under OS X
Newer versions of OS X require that the embedded Python framework be signed separately.
2014-10-18 11:09:18 -04:00
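For reference, a `--deep` signing invocation looks roughly like this (the identity and bundle name below are placeholders, not taken from dupeGuru's build scripts):

```shell
# --deep recurses into nested code (frameworks, helpers), so the embedded
# Python framework gets signed along with the app bundle itself.
# "Developer ID Application: Example" and dupeGuru.app are placeholders.
codesign --deep --force --sign "Developer ID Application: Example" dupeGuru.app
```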
Virgil Dupras
79800bc6ed Added --arch-pkg option to package.py
Otherwise, AUR packages don't work with Arch lookalikes like Manjaro.
2014-10-17 15:58:45 -04:00
Virgil Dupras
6e7b95b2cf se v3.9.1 2014-10-17 15:51:48 -04:00
Virgil Dupras
bf09c4ce8a Nicely wrap PermissionDenied errors on save
In fact, all `OSError`.

ref #266
2014-10-17 15:46:43 -04:00
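A minimal sketch of the idea behind this commit (not dupeGuru's actual code; `OperationError` and `save_with_nice_errors` are hypothetical names): any `OSError` raised while saving is wrapped into a single app-level error with a readable message.

```python
class OperationError(Exception):
    """Hypothetical app-level error whose message is shown to the user."""

def save_with_nice_errors(save_func, path):
    try:
        save_func(path)
    except OSError as e:
        # PermissionError, FileNotFoundError, etc. are all OSError
        # subclasses, so one handler covers every save failure.
        msg = "Couldn't save to {}: {}".format(path, e.strerror or str(e))
        raise OperationError(msg) from e
```

The caller then only has to catch one exception type to present a friendly dialog.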
Virgil Dupras
b4a73771c2 Fix iCCP: known incorrect sRGB profile warnings in stderr
I processed all images through `convert -strip`.

It's still possible, however, to get these errors if PE tries to open an
image with an invalid profile.
2014-10-17 15:45:07 -04:00
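The cleanup described above can be reproduced with ImageMagick (the filenames here are placeholders):

```shell
# -strip removes embedded profiles and comments, including the malformed
# sRGB iCCP chunk that libpng warns about.
convert input.png -strip output.png
```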
Virgil Dupras
2166a0996c Added tox configuration
... and fixed pep8 warnings. Many of them are still ignored, but fixing
them all at once would have been too big a step.
2014-10-13 15:08:59 -04:00
Virgil Dupras
24643a9b5d Updated copyright year to 2014 in Cocoa about boxes
Better late than never.
2014-10-12 13:19:55 -04:00
Virgil Dupras
045051ce06 Fixed formatting in changelog_pe 2014-10-12 10:52:41 -04:00
Virgil Dupras
7c3728ca47 Converted hscommon.jobprogress.qt to Qt5 2014-10-12 10:52:21 -04:00
Virgil Dupras
91be1c7336 pe v2.10.1 2014-10-12 10:47:18 -04:00
Virgil Dupras
162378bb0a Updated hscommon 2014-10-12 10:39:21 -04:00
Virgil Dupras
4e3cad5702 Fixed minor typo 2014-10-12 10:15:07 -04:00
Virgil Dupras
321f8ab406 Catch MemoryError better in PE's block matching algo
fixes #264 (for good this time, hopefully)
2014-10-05 22:22:59 -04:00
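As a rough sketch of the idea (not dupeGuru's actual matching code; `find_matches` and `compare` are hypothetical stand-ins): the fix amounts to catching `MemoryError` around the match-accumulation loop and keeping whatever partial results were gathered instead of crashing.

```python
def find_matches(pictures, compare):
    matches = []
    try:
        for i, p1 in enumerate(pictures):
            for p2 in pictures[i + 1:]:
                score = compare(p1, p2)
                if score > 0:
                    matches.append((p1, p2, score))
    except MemoryError:
        # Ran out of memory mid-scan: stop matching but return the
        # matches gathered so far rather than aborting the whole scan.
        pass
    return matches
```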
Virgil Dupras
5b3d5f5d1c Tweaked the main dev help page to have actual reflinks 2014-10-05 20:12:38 -04:00
Virgil Dupras
372a682610 Catch MemoryError in PE's block matching algo
fixes #264 (hopefully)
2014-10-05 17:13:36 -04:00
Virgil Dupras
44266273bf Included hscommon.jobprogress in the devdocs 2014-10-05 17:12:10 -04:00
Virgil Dupras
ac32305532 Integrated the jobprogress library into hscommon
I have a fix to make in it and it's really silly to pretend that this
lib is of any use to anybody outside HS apps. Bringing it back here will
make things simpler.
2014-10-05 16:31:16 -04:00
Virgil Dupras
87c2fa2573 Updated README which was a bit outdated 2014-10-04 17:01:22 -04:00
Virgil Dupras
db63b63cfd Fix crash in PE when reading some EXIF tags
The crash was caused by ObjP, which crashed when converting `NSDictionary` containing unsupported types.

Updating ObjP to v1.3.1 does the trick.

fixes #263
fixes #265
2014-10-04 16:35:26 -04:00
Virgil Dupras
6725b2bf0f Updated German localisation, by Frank Weber 2014-09-28 13:40:09 -04:00
Virgil Dupras
990e73c383 Catch Sphinx SystemExit when building help
In a recent Sphinx release, it started calling `sys.exit()` and that
caused our whole build process to exit prematurely.
2014-09-13 16:05:40 -04:00
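The workaround amounts to swallowing `SystemExit` around the Sphinx call, something like this hedged sketch (`build_help` and `run_sphinx` are hypothetical names, not the project's actual functions):

```python
def build_help(run_sphinx):
    try:
        run_sphinx()
    except SystemExit:
        # Newer Sphinx calls sys.exit() when done; don't let that kill
        # the rest of the packaging process.
        pass
```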
Virgil Dupras
9e9e73aa6b qtlib: Fix broken SelectableList
It was still using `.reset()`, which disappeared in Qt5.

Fixes #254.
2014-07-01 08:30:56 -04:00
Virgil Dupras
8434befe1f me v6.8.0 2014-05-11 09:26:55 -04:00
Virgil Dupras
1114ac5613 Fixed debian packaging 2014-05-11 09:11:38 -04:00
Virgil Dupras
f5f29d775c Adapt IPhotoPlistParser to Python 3.4
This also means that Python 3.3 isn't supported anymore for that part.
Updated README accordingly.
2014-05-03 15:12:13 -04:00
Virgil Dupras
ebd7f1b4ce pe v2.10.0 2014-05-03 13:57:00 -04:00
Virgil Dupras
279b7ad10c Fix typo in README 2014-05-03 13:53:16 -04:00
Virgil Dupras
878205fc49 Fix empty ignore List dialog bug in PE
Re-instantiating a new scanner for PE made the ignore list dialog
target the wrong ignore list. We now instantiate the scanner only once.

Fixes #253
2014-05-03 13:44:38 -04:00
Virgil Dupras
b16df32150 I'm giving PyCharm a try 2014-05-03 13:39:39 -04:00
Virgil Dupras
04b06f7704 Removed the setNativeMenuBar() call under Qt
I put it there to make the menu usable under Ubuntu 13.10, but since
14.04, this line actually breaks it.
2014-05-03 09:34:41 -04:00
Virgil Dupras
c6ea1c62d4 Fixed Windows packaging 2014-04-21 10:00:53 -04:00
Virgil Dupras
6ce0f66601 Fixed debian packaging 2014-04-19 18:32:11 -04:00
Virgil Dupras
ac3a9e3ba8 Removed Qt's "Check for updates"
It only worked on 32bit Windows, and it's gone now.
2014-04-19 18:21:56 -04:00
Virgil Dupras
903d2f9183 Improved arch packaging
No need to bundle a .desktop file with arch source packages anymore.
dupeGuru's source package takes care of that.
2014-04-19 17:50:40 -04:00
Virgil Dupras
ca709a60cf Updated copyright year to 2014 2014-04-19 12:19:11 -04:00
Virgil Dupras
a9b4ce5529 se v3.9.0 2014-04-19 12:17:26 -04:00
Virgil Dupras
9b82ceca67 Updated windows packaging for Qt5
We now only support 64bit Windows.
2014-04-18 13:22:04 -04:00
Virgil Dupras
4c7c279dd2 Avoid crashes on quit under Windows 2014-04-18 10:55:01 -04:00
Virgil Dupras
79db31685e Fixed crash on results double-click
Introduced by the Qt5 move. Looks like passing `None` to
`doubleClicked.emit()` doesn't cut it anymore.
2014-04-18 10:44:59 -04:00
235 changed files with 2394 additions and 2412 deletions

.gitignore
@@ -5,6 +5,8 @@
 *.pyd
 *.waf*
 .lock-waf*
+.idea
+.tox
 build
 dist

@@ -1,4 +1,4 @@
-Copyright 2013 Hardcoded Software Inc. (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software Inc. (http://www.hardcoded.net)
 All rights reserved.

 Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

@@ -3,7 +3,7 @@
 [dupeGuru][dupeguru] is a cross-platform (Linux, OS X, Windows) GUI tool to find duplicate files in
 a system. It's written mostly in Python 3 and has the peculiarity of using
 [multiple GUI toolkits][cross-toolkit], all using the same core Python code. On OS X, the UI layer
-is written in Objective-C and uses Cocoa. On Linux and Windows, it's written in Python and uses Qt4.
+is written in Objective-C and uses Cocoa. On Linux and Windows, it's written in Python and uses Qt5.
 dupeGuru comes in 3 editions (standard, music and picture) which are all buildable from this same
 source tree. You choose the edition you want to build in a ``configure.py`` flag.
@@ -18,7 +18,7 @@ This folder contains the source for dupeGuru. Its documentation is in ``help``,
 * cocoa: UI code for the Cocoa toolkit. It's Objective-C code.
 * qt: UI code for the Qt toolkit. It's written in Python and uses PyQt.
 * images: Images used by the different UI codebases.
-* debian: Skeleton files required to create a .deb package
+* pkg: Skeleton files required to create different packages
 * help: Help document, written for Sphinx.
 * locale: .po files for localisation.
@@ -47,11 +47,11 @@ Prerequisites are installed through `pip`. However, some of them are not "pip in
 to be installed manually.
 * All systems: [Python 3.3+][python] and [setuptools][setuptools]
-* Mac OS X: The last XCode to have the 10.6 SDK included.
+* Mac OS X: The last XCode to have the 10.7 SDK included. Python 3.4+.
 * Windows: Visual Studio 2010, [PyQt 5.0+][pyqt], [cx_Freeze][cxfreeze] and
   [Advanced Installer][advinst] (you only need the last two if you want to create an installer)
-On Ubuntu (13.10+), the apt-get command to install all pre-requisites is:
+On Ubuntu (14.04+), the apt-get command to install all pre-requisites is:
 $ apt-get install python3-dev python3-pyqt5 pyqt5-dev-tools
@@ -63,12 +63,12 @@ On Arch, it's:
 Use Python's built-in `pyvenv` to create a virtual environment in which we're going to install our
 Python-related dependencies. `pyvenv` is built-in Python but, unlike its `virtualenv` predecessor,
-it doesn't install setuptools and pip, so it has to be installed manually:
+it doesn't install setuptools and pip (unless you use Python 3.4+), so it has to be installed
+manually:
 $ pyvenv --system-site-packages env
 $ source env/bin/activate
-$ wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py -O - | python
-$ easy_install pip
+$ python get-pip.py
 Then, you can install pip requirements in your virtualenv:
@@ -96,3 +96,4 @@ You can also package dupeGuru into an installable package with:
 [pyqt]: http://www.riverbankcomputing.com
 [cxfreeze]: http://cx-freeze.sourceforge.net/
 [advinst]: http://www.advancedinstaller.com

build.py
@@ -1,9 +1,9 @@
 # Created By: Virgil Dupras
 # Created On: 2009-12-30
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 import sys
@@ -18,10 +18,12 @@ import compileall
 from setuptools import setup, Extension
 from hscommon import sphinxgen
-from hscommon.build import (add_to_pythonpath, print_and_do, copy_packages, filereplace,
+from hscommon.build import (
+    add_to_pythonpath, print_and_do, copy_packages, filereplace,
     get_module_version, move_all, copy_all, OSXAppStructure,
     build_cocoalib_xibless, fix_qt_resource_file, build_cocoa_ext, copy_embeddable_python_dylib,
-    collect_stdlib_dependencies, copy)
+    collect_stdlib_dependencies, copy
+)
 from hscommon import loc
 from hscommon.plat import ISOSX, ISLINUX
 from hscommon.util import ensure_folder, delete_files_with_pattern
@@ -29,24 +31,42 @@ from hscommon.util import ensure_folder, delete_files_with_pattern
 def parse_args():
     usage = "usage: %prog [options]"
     parser = OptionParser(usage=usage)
-    parser.add_option('--clean', action='store_true', dest='clean',
-        help="Clean build folder before building")
-    parser.add_option('--doc', action='store_true', dest='doc',
-        help="Build only the help file")
-    parser.add_option('--loc', action='store_true', dest='loc',
-        help="Build only localization")
-    parser.add_option('--cocoa-ext', action='store_true', dest='cocoa_ext',
-        help="Build only Cocoa extensions")
-    parser.add_option('--cocoa-compile', action='store_true', dest='cocoa_compile',
-        help="Build only Cocoa executable")
-    parser.add_option('--xibless', action='store_true', dest='xibless',
-        help="Build only xibless UIs")
-    parser.add_option('--updatepot', action='store_true', dest='updatepot',
-        help="Generate .pot files from source code.")
-    parser.add_option('--mergepot', action='store_true', dest='mergepot',
-        help="Update all .po files based on .pot files.")
-    parser.add_option('--normpo', action='store_true', dest='normpo',
-        help="Normalize all PO files (do this before commit).")
+    parser.add_option(
+        '--clean', action='store_true', dest='clean',
+        help="Clean build folder before building"
+    )
+    parser.add_option(
+        '--doc', action='store_true', dest='doc',
+        help="Build only the help file"
+    )
+    parser.add_option(
+        '--loc', action='store_true', dest='loc',
+        help="Build only localization"
+    )
+    parser.add_option(
+        '--cocoa-ext', action='store_true', dest='cocoa_ext',
+        help="Build only Cocoa extensions"
+    )
+    parser.add_option(
+        '--cocoa-compile', action='store_true', dest='cocoa_compile',
+        help="Build only Cocoa executable"
+    )
+    parser.add_option(
+        '--xibless', action='store_true', dest='xibless',
+        help="Build only xibless UIs"
+    )
+    parser.add_option(
+        '--updatepot', action='store_true', dest='updatepot',
+        help="Generate .pot files from source code."
+    )
+    parser.add_option(
+        '--mergepot', action='store_true', dest='mergepot',
+        help="Update all .po files based on .pot files."
+    )
+    parser.add_option(
+        '--normpo', action='store_true', dest='normpo',
+        help="Normalize all PO files (do this before commit)."
+    )
     (options, args) = parser.parse_args()
     return options
@@ -75,12 +95,20 @@ def build_xibless(edition, dest='cocoa/autogen'):
         ('preferences_panel.py', 'PreferencesPanel_UI'),
     ]
     for srcname, dstname in FNPAIRS:
-        xibless.generate(op.join('cocoa', 'base', 'ui', srcname), op.join(dest, dstname),
-            localizationTable='Localizable', args={'edition': edition})
+        xibless.generate(
+            op.join('cocoa', 'base', 'ui', srcname), op.join(dest, dstname),
+            localizationTable='Localizable', args={'edition': edition}
+        )
     if edition == 'pe':
-        xibless.generate('cocoa/pe/ui/details_panel.py', op.join(dest, 'DetailsPanel_UI'), localizationTable='Localizable')
+        xibless.generate(
+            'cocoa/pe/ui/details_panel.py', op.join(dest, 'DetailsPanel_UI'),
+            localizationTable='Localizable'
+        )
     else:
-        xibless.generate('cocoa/base/ui/details_panel.py', op.join(dest, 'DetailsPanel_UI'), localizationTable='Localizable')
+        xibless.generate(
+            'cocoa/base/ui/details_panel.py', op.join(dest, 'DetailsPanel_UI'),
+            localizationTable='Localizable'
+        )

 def build_cocoa(edition, dev):
     print("Creating OS X app structure")
@@ -110,15 +138,16 @@ def build_cocoa(edition, dev):
         'me': ['core_me'] + appscript_pkgs + ['hsaudiotag'],
         'pe': ['core_pe'] + appscript_pkgs,
     }[edition]
-    tocopy = ['core', 'hscommon', 'cocoa/inter', 'cocoalib/cocoa', 'jobprogress', 'objp',
-        'send2trash'] + specific_packages
+    tocopy = [
+        'core', 'hscommon', 'cocoa/inter', 'cocoalib/cocoa', 'objp', 'send2trash'
+    ] + specific_packages
     copy_packages(tocopy, pydep_folder, create_links=dev)
     sys.path.insert(0, 'build')
     extra_deps = None
     if edition == 'pe':
         # ModuleFinder can't seem to correctly detect the multiprocessing dependency, so we have
         # to manually specify it.
-        extra_deps=['multiprocessing']
+        extra_deps = ['multiprocessing']
     collect_stdlib_dependencies('build/dg_cocoa.py', pydep_folder, extra_deps=extra_deps)
     del sys.path[0]
     # Views are not referenced by python code, so they're not found by the collector.
@@ -224,8 +253,10 @@ def build_updatepot():
     os.remove(cocoalib_pot)
     loc.strings2pot(op.join('cocoalib', 'en.lproj', 'cocoalib.strings'), cocoalib_pot)
     print("Enhancing ui.pot with Cocoa's strings files")
-    loc.strings2pot(op.join('cocoa', 'base', 'en.lproj', 'Localizable.strings'),
-        op.join('locale', 'ui.pot'))
+    loc.strings2pot(
+        op.join('cocoa', 'base', 'en.lproj', 'Localizable.strings'),
+        op.join('locale', 'ui.pot')
+    )

 def build_mergepot():
     print("Updating .po files using .pot files")
@@ -242,11 +273,15 @@ def build_cocoa_proxy_module():
     print("Building Cocoa Proxy")
     import objp.p2o
     objp.p2o.generate_python_proxy_code('cocoalib/cocoa/CocoaProxy.h', 'build/CocoaProxy.m')
-    build_cocoa_ext("CocoaProxy", 'cocoalib/cocoa',
-        ['cocoalib/cocoa/CocoaProxy.m', 'build/CocoaProxy.m', 'build/ObjP.m',
-        'cocoalib/HSErrorReportWindow.m', 'cocoa/autogen/HSErrorReportWindow_UI.m'],
+    build_cocoa_ext(
+        "CocoaProxy", 'cocoalib/cocoa',
+        [
+            'cocoalib/cocoa/CocoaProxy.m', 'build/CocoaProxy.m', 'build/ObjP.m',
+            'cocoalib/HSErrorReportWindow.m', 'cocoa/autogen/HSErrorReportWindow_UI.m'
+        ],
         ['AppKit', 'CoreServices'],
-        ['cocoalib', 'cocoa/autogen'])
+        ['cocoalib', 'cocoa/autogen']
+    )

 def build_cocoa_bridging_interfaces(edition):
     print("Building Cocoa Bridging Interfaces")
@@ -254,9 +289,11 @@ def build_cocoa_bridging_interfaces(edition):
     import objp.p2o
     add_to_pythonpath('cocoa')
     add_to_pythonpath('cocoalib')
-    from cocoa.inter import (PyGUIObject, GUIObjectView, PyColumns, ColumnsView, PyOutline,
+    from cocoa.inter import (
+        PyGUIObject, GUIObjectView, PyColumns, ColumnsView, PyOutline,
         OutlineView, PySelectableList, SelectableListView, PyTable, TableView, PyBaseApp,
-        PyTextField, ProgressWindowView, PyProgressWindow)
+        PyTextField, ProgressWindowView, PyProgressWindow
+    )
     from inter.deletion_options import PyDeletionOptions, DeletionOptionsView
     from inter.details_panel import PyDetailsPanel, DetailsPanelView
     from inter.directory_outline import PyDirectoryOutline, DirectoryOutlineView
@@ -268,16 +305,20 @@ def build_cocoa_bridging_interfaces(edition):
     from inter.stats_label import PyStatsLabel, StatsLabelView
     from inter.app import PyDupeGuruBase, DupeGuruView
     appmod = importlib.import_module('inter.app_{}'.format(edition))
-    allclasses = [PyGUIObject, PyColumns, PyOutline, PySelectableList, PyTable, PyBaseApp,
+    allclasses = [
+        PyGUIObject, PyColumns, PyOutline, PySelectableList, PyTable, PyBaseApp,
         PyDetailsPanel, PyDirectoryOutline, PyPrioritizeDialog, PyPrioritizeList, PyProblemDialog,
         PyIgnoreListDialog, PyDeletionOptions, PyResultTable, PyStatsLabel, PyDupeGuruBase,
-        PyTextField, PyProgressWindow, appmod.PyDupeGuru]
+        PyTextField, PyProgressWindow, appmod.PyDupeGuru
+    ]
     for class_ in allclasses:
         objp.o2p.generate_objc_code(class_, 'cocoa/autogen', inherit=True)
-    allclasses = [GUIObjectView, ColumnsView, OutlineView, SelectableListView, TableView,
+    allclasses = [
+        GUIObjectView, ColumnsView, OutlineView, SelectableListView, TableView,
         DetailsPanelView, DirectoryOutlineView, PrioritizeDialogView, PrioritizeListView,
         IgnoreListDialogView, DeletionOptionsView, ResultTableView, StatsLabelView,
-        ProgressWindowView, DupeGuruView]
+        ProgressWindowView, DupeGuruView
+    ]
     clsspecs = [objp.o2p.spec_from_python_class(class_) for class_ in allclasses]
     objp.p2o.generate_python_proxy_code_from_clsspec(clsspecs, 'build/CocoaViews.m')
     build_cocoa_ext('CocoaViews', 'cocoa/inter', ['build/CocoaViews.m', 'build/ObjP.m'])
@@ -296,11 +337,12 @@ def build_pe_modules(ui):
         extra_link_args=[
             "-framework", "CoreFoundation",
             "-framework", "Foundation",
-            "-framework", "ApplicationServices",]
+            "-framework", "ApplicationServices",
+        ]
     ))
     setup(
-        script_args = ['build_ext', '--inplace'],
-        ext_modules = exts,
+        script_args=['build_ext', '--inplace'],
+        ext_modules=exts,
     )
     move_all('_block_qt*', op.join('qt', 'pe'))
     move_all('_block*', 'core_pe')

@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
/* /*
Copyright 2013 Hardcoded Software (http://www.hardcoded.net) Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
This software is licensed under the "BSD" License as described in the "LICENSE" file, This software is licensed under the "BSD" License as described in the "LICENSE" file,
which should be included with this package. The terms are also available at which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -12,7 +12,7 @@ dialogHeights = {
 scanTypeNames = {
     'se': ["Filename", "Content", "Folders"],
     'me': ["Filename", "Filename - Fields", "Filename - Fields (No Order)", "Tags", "Content", "Audio Content"],
-    'pe': ["Contents", "EXIF Timestamp", "Trigger-happy mode"],
+    'pe': ["Contents", "EXIF Timestamp"],
 }
 result = Window(410, dialogHeights[edition], dialogTitles[edition])


@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2006/11/16
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at


@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2006/11/13
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
@@ -331,7 +331,6 @@ class PyDupeGuru(PyDupeGuruBase):
             self.model.scanner.scan_type = [
                 ScanType.FuzzyBlock,
                 ScanType.ExifTimestamp,
-                ScanType.TriggerHappyMode,
             ][scan_type]
         except IndexError:
             pass


@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2009-05-24
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 # Created On: 2012-05-30
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -31,7 +31,7 @@
     <key>NSPrincipalClass</key>
     <string>NSApplication</string>
     <key>NSHumanReadableCopyright</key>
-    <string>© Hardcoded Software, 2013</string>
+    <string>© Hardcoded Software, 2014</string>
     <key>SUFeedURL</key>
     <string>http://www.hardcoded.net/updates/dupeguru_me.appcast</string>
     <key>SUPublicDSAKeyFile</key>


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,4 +1,4 @@
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -31,7 +31,7 @@
     <key>NSPrincipalClass</key>
     <string>NSApplication</string>
     <key>NSHumanReadableCopyright</key>
-    <string>© Hardcoded Software, 2013</string>
+    <string>© Hardcoded Software, 2014</string>
     <key>SUFeedURL</key>
     <string>http://www.hardcoded.net/updates/dupeguru_pe.appcast</string>
     <key>SUPublicDSAKeyFile</key>


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,4 +1,4 @@
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -29,7 +29,7 @@
     <key>NSPrincipalClass</key>
     <string>NSApplication</string>
     <key>NSHumanReadableCopyright</key>
-    <string>© Hardcoded Software, 2013</string>
+    <string>© Hardcoded Software, 2014</string>
     <key>SUFeedURL</key>
     <string>http://www.hardcoded.net/updates/dupeguru.appcast</string>
     <key>SUPublicDSAKeyFile</key>


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,5 +1,5 @@
 /*
-Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 This software is licensed under the "BSD" License as described in the "LICENSE" file,
 which should be included with this package. The terms are also available at


@@ -1,4 +1,4 @@
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at


@@ -2,8 +2,8 @@
 # Created On: 2007-10-06
 # Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license
 import logging
@@ -26,7 +26,7 @@ def autoreleasepool(func):
 def as_fetch(as_list, as_type, step_size=1000):
     """When fetching items from a very big list through applescript, the connection with the app
     will timeout. This function is to circumvent that. 'as_type' is the type of the items in the
     list (found in appscript.k). If we don't pass it to the 'each' arg of 'count()', it doesn't work.
     applescript is rather stupid..."""
     result = []
@@ -66,7 +66,7 @@ def extract_tb_noline(tb):
 def safe_format_exception(type, value, tb):
     """Format exception from type, value and tb and fallback if there's a problem.
 
     In some cases in threaded exceptions under Cocoa, I get tracebacks targeting pyc files instead
     of py files, which results in traceback.format_exception() trying to print lines from pyc files
     and then crashing when trying to interpret that binary data as utf-8. We want a fallback in
@@ -113,5 +113,6 @@ def patch_threaded_job_performer():
     # _async_run, under cocoa, has to be run within an autorelease pool to prevent leaks.
     # You only need this patch is you use one of CocoaProxy's function (which allocate objc
     # structures) inside a threaded job.
-    from jobprogress.performer import ThreadedJobPerformer
+    from hscommon.jobprogress.performer import ThreadedJobPerformer
     ThreadedJobPerformer._async_run = autoreleasepool(ThreadedJobPerformer._async_run)
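The last hunk above wraps `ThreadedJobPerformer._async_run` in the `autoreleasepool` decorator by reassigning the attribute on the class, so every thread-spawned run gets an autorelease pool without touching hscommon itself. A minimal sketch of that wrap-and-reassign monkey-patching pattern; all names below (`with_setup_teardown`, `Performer`) are illustrative stand-ins, not dupeGuru's real API:

```python
import functools

calls = []  # records the order of events so the patch's effect is visible

def with_setup_teardown(func):
    """Stand-in for autoreleasepool(): wraps a callable with
    setup/teardown around every invocation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        calls.append('setup')      # e.g. NSAutoreleasePool.alloc().init()
        try:
            return func(*args, **kwargs)
        finally:
            calls.append('teardown')  # e.g. pool.release()
    return wrapper

class Performer:
    def _async_run(self):
        calls.append('run')

# The patch itself: replace the method on the class with its wrapped version.
# Existing and future instances all pick up the wrapper.
Performer._async_run = with_setup_teardown(Performer._async_run)

Performer()._async_run()
# calls is now ['setup', 'run', 'teardown']
```

Because the wrapper takes `*args`, it works unchanged as a method: `self` flows through to the original function.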


@@ -1,12 +1,11 @@
 # Created By: Virgil Dupras
 # Created On: 2009-12-30
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license
-import sys
 from optparse import OptionParser
 import json
@@ -29,11 +28,18 @@
 if __name__ == '__main__':
     usage = "usage: %prog [options]"
     parser = OptionParser(usage=usage)
-    parser.add_option('--edition', dest='edition',
-        help="dupeGuru edition to build (se, me or pe). Default is se.")
-    parser.add_option('--ui', dest='ui',
-        help="Type of UI to build. 'qt' or 'cocoa'. Default is determined by your system.")
-    parser.add_option('--dev', action='store_true', dest='dev', default=False,
-        help="If this flag is set, will configure for dev builds.")
+    parser.add_option(
+        '--edition', dest='edition',
+        help="dupeGuru edition to build (se, me or pe). Default is se."
+    )
+    parser.add_option(
+        '--ui', dest='ui',
+        help="Type of UI to build. 'qt' or 'cocoa'. Default is determined by your system."
+    )
+    parser.add_option(
+        '--dev', action='store_true', dest='dev', default=False,
+        help="If this flag is set, will configure for dev builds."
+    )
     (options, args) = parser.parse_args()
     main(options)


@@ -1,9 +1,9 @@
 # Created By: Virgil Dupras
 # Created On: 2006/11/11
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license
 import os
@@ -15,7 +15,7 @@ import time
 import shutil
 from send2trash import send2trash
-from jobprogress import job
+from hscommon.jobprogress import job
 from hscommon.notify import Broadcaster
 from hscommon.path import Path
 from hscommon.conflict import smart_move, smart_copy
@@ -38,8 +38,10 @@ DEBUG_MODE_PREFERENCE = 'DebugMode'
 MSG_NO_MARKED_DUPES = tr("There are no marked duplicates. Nothing has been done.")
 MSG_NO_SELECTED_DUPES = tr("There are no selected duplicates. Nothing has been done.")
-MSG_MANY_FILES_TO_OPEN = tr("You're about to open many files at once. Depending on what those "
-    "files are opened with, doing so can create quite a mess. Continue?")
+MSG_MANY_FILES_TO_OPEN = tr(
+    "You're about to open many files at once. Depending on what those "
+    "files are opened with, doing so can create quite a mess. Continue?"
+)
 
 class DestType:
     Direct = 0
@@ -78,7 +80,7 @@ def format_words(w):
             return '(%s)' % ', '.join(do_format(item) for item in w)
         else:
             return w.replace('\n', ' ')
     return ', '.join(do_format(item) for item in w)
 
 def format_perc(p):
@@ -110,33 +112,33 @@ def fix_surrogate_encoding(s, encoding='utf-8'):
 class DupeGuru(Broadcaster):
     """Holds everything together.
 
     Instantiated once per running application, it holds a reference to every high-level object
     whose reference needs to be held: :class:`~core.results.Results`, :class:`Scanner`,
     :class:`~core.directories.Directories`, :mod:`core.gui` instances, etc..
 
     It also hosts high level methods and acts as a coordinator for all those elements. This is why
     some of its methods seem a bit shallow, like for example :meth:`mark_all` and
     :meth:`remove_duplicates`. These methos are just proxies for a method in :attr:`results`, but
     they are also followed by a notification call which is very important if we want GUI elements
     to be correctly notified of a change in the data they're presenting.
 
     .. attribute:: directories
 
         Instance of :class:`~core.directories.Directories`. It holds the current folder selection.
 
     .. attribute:: results
 
         Instance of :class:`core.results.Results`. Holds the results of the latest scan.
 
     .. attribute:: selected_dupes
 
         List of currently selected dupes from our :attr:`results`. Whenever the user changes its
         selection at the UI level, :attr:`result_table` takes care of updating this attribute, so
         you can trust that it's always up-to-date.
 
     .. attribute:: result_table
 
         Instance of :mod:`meta-gui <core.gui>` table listing the results from :attr:`results`
     """
 
     #--- View interface
@@ -151,10 +153,10 @@ class DupeGuru(Broadcaster):
     # show_problem_dialog()
     # select_dest_folder(prompt: str) --> str
     # select_dest_file(prompt: str, ext: str) --> str
 
-    # in fairware prompts, we don't mention the edition, it's too long.
     PROMPT_NAME = "dupeGuru"
+    SCANNER_CLASS = scanner.Scanner
 
     def __init__(self, view):
         if view.get_default(DEBUG_MODE_PREFERENCE):
             logging.getLogger().setLevel(logging.DEBUG)
@@ -166,7 +168,7 @@ class DupeGuru(Broadcaster):
             os.makedirs(self.appdata)
         self.directories = directories.Directories()
         self.results = results.Results(self)
-        self.scanner = scanner.Scanner()
+        self.scanner = self.SCANNER_CLASS()
         self.options = {
             'escape_filter_regexp': True,
             'clean_empty_dirs': False,
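The two hunks above replace a hardcoded `scanner.Scanner()` with a `SCANNER_CLASS` class attribute, so each dupeGuru edition can plug in its own scanner by overriding one attribute. A minimal sketch of that pattern; every name except `SCANNER_CLASS` is a hypothetical stand-in, not the real dupeGuru API:

```python
# Sketch of the SCANNER_CLASS hook: the base class instantiates whatever
# scanner class a subclass declares, instead of hardcoding one in __init__.

class Scanner:
    def get_dupe_groups(self, files, j=None):
        return []

class PhotoScanner(Scanner):  # hypothetical edition-specific scanner
    pass

class AppBase:
    SCANNER_CLASS = Scanner

    def __init__(self):
        # subclasses override SCANNER_CLASS; __init__ never changes
        self.scanner = self.SCANNER_CLASS()

class PhotoApp(AppBase):
    SCANNER_CLASS = PhotoScanner

print(type(PhotoApp().scanner).__name__)  # PhotoScanner
```

The design choice is the usual "class attribute as factory" idiom: subclasses customize behavior declaratively rather than re-implementing `__init__`.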
@@ -185,14 +187,14 @@ class DupeGuru(Broadcaster):
         children = [self.result_table, self.directory_tree, self.stats_label, self.details_panel]
         for child in children:
             child.connect()

     #--- Virtual
     def _prioritization_categories(self):
         raise NotImplementedError()

     def _create_result_table(self):
         raise NotImplementedError()

     #--- Private
     def _get_dupe_sort_key(self, dupe, get_group, key, delta):
         if key == 'marked':
@@ -212,7 +214,7 @@ class DupeGuru(Broadcaster):
             same = cmp_value(dupe, key) == refval
             result = (same, result)
         return result

     def _get_group_sort_key(self, group, key):
         if key == 'percentage':
             return group.percentage
@@ -221,15 +223,15 @@ class DupeGuru(Broadcaster):
         if key == 'marked':
             return len([dupe for dupe in group.dupes if self.results.is_marked(dupe)])
         return cmp_value(group.ref, key)

     def _do_delete(self, j, link_deleted, use_hardlinks, direct_deletion):
         def op(dupe):
             j.add_progress()
             return self._do_delete_dupe(dupe, link_deleted, use_hardlinks, direct_deletion)

         j.start_job(self.results.mark_count)
         self.results.perform_on_marked(op, True)

     def _do_delete_dupe(self, dupe, link_deleted, use_hardlinks, direct_deletion):
         if not dupe.path.exists():
             return
@@ -248,11 +250,11 @@ class DupeGuru(Broadcaster):
             linkfunc = os.link if use_hardlinks else os.symlink
             linkfunc(str(ref.path), str_path)
         self.clean_empty_dirs(dupe.path.parent())

     def _create_file(self, path):
         # We add fs.Folder to fileclasses in case the file we're loading contains folder paths.
         return fs.get_file(path, self.directories.fileclasses + [fs.Folder])

     def _get_file(self, str_path):
         path = Path(str_path)
         f = self._create_file(path)
@@ -263,10 +265,12 @@ class DupeGuru(Broadcaster):
             return f
         except EnvironmentError:
             return None

     def _get_export_data(self):
-        columns = [col for col in self.result_table.columns.ordered_columns
-                   if col.visible and col.name != 'marked']
+        columns = [
+            col for col in self.result_table.columns.ordered_columns
+            if col.visible and col.name != 'marked'
+        ]
         colnames = [col.display for col in columns]
         rows = []
         for group_id, group in enumerate(self.results.groups):
@@ -276,20 +280,25 @@ class DupeGuru(Broadcaster):
                 row.insert(0, group_id)
                 rows.append(row)
         return colnames, rows

     def _results_changed(self):
-        self.selected_dupes = [d for d in self.selected_dupes
-                               if self.results.get_group_of_duplicate(d) is not None]
+        self.selected_dupes = [
+            d for d in self.selected_dupes
+            if self.results.get_group_of_duplicate(d) is not None
+        ]
         self.notify('results_changed')

     def _start_job(self, jobid, func, args=()):
         title = JOBID2TITLE[jobid]
         try:
             self.progress_window.run(jobid, title, func, args=args)
         except job.JobInProgressError:
-            msg = tr("A previous action is still hanging in there. You can't start a new one yet. Wait a few seconds, then try again.")
+            msg = tr(
+                "A previous action is still hanging in there. You can't start a new one yet. Wait "
+                "a few seconds, then try again."
+            )
             self.view.show_message(msg)

     def _job_completed(self, jobid):
         if jobid == JobType.Scan:
             self._results_changed()
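`_start_job()` above refuses to start a new async job while one is already running: the progress window raises `JobInProgressError` and the app shows a message instead of crashing. A minimal sketch of that single-job guard under stated assumptions — `JobRunner` and its `run()` are stand-ins, not the `hscommon.jobprogress` API:

```python
# Single-job guard: a non-blocking lock acquire detects a job already in
# flight and raises instead of queueing or blocking.
import threading

class JobInProgressError(Exception):
    pass

class JobRunner:
    def __init__(self):
        self._lock = threading.Lock()

    def run(self, func):
        # non-blocking acquire: fail fast if a job is already running
        if not self._lock.acquire(blocking=False):
            raise JobInProgressError()
        try:
            return func()
        finally:
            self._lock.release()
```

The caller catches `JobInProgressError` and tells the user to retry in a few seconds, which is exactly what the hunk's `tr(...)` message does.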
@@ -312,7 +321,7 @@ class DupeGuru(Broadcaster):
             JobType.Delete: tr("All marked files were successfully sent to Trash."),
         }[jobid]
         self.view.show_message(msg)

     @staticmethod
     def _remove_hardlink_dupes(files):
         seen_inodes = set()
@@ -327,19 +336,19 @@ class DupeGuru(Broadcaster):
                 seen_inodes.add(inode)
                 result.append(file)
         return result
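`_remove_hardlink_dupes()` above keeps only one file per inode, so hardlinks to the same data are never reported as duplicates of each other. The same idea, sketched standalone over plain paths (a simplification of the method, not dupeGuru's actual helper):

```python
# Deduplicate by inode: paths that are hardlinks to the same file share an
# st_ino, so only the first path seen for each inode survives.
import os
import tempfile

def remove_hardlink_dupes(paths):
    seen_inodes = set()
    result = []
    for path in paths:
        try:
            inode = os.stat(path).st_ino
        except OSError:
            continue  # file vanished between listing and stat; skip it
        if inode not in seen_inodes:
            seen_inodes.add(inode)
            result.append(path)
    return result

# two names, one inode: only the first name survives
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, 'a')
    b = os.path.join(d, 'b')
    open(a, 'w').close()
    os.link(a, b)  # b is a hardlink to a
    print(remove_hardlink_dupes([a, b]))  # only 'a' remains
```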

     def _select_dupes(self, dupes):
         if dupes == self.selected_dupes:
             return
         self.selected_dupes = dupes
         self.notify('dupes_selected')

     #--- Public
     def add_directory(self, d):
         """Adds folder ``d`` to :attr:`directories`.

         Shows an error message dialog if something bad happens.

         :param str d: path of folder to add
         """
         try:
@@ -349,7 +358,7 @@ class DupeGuru(Broadcaster):
             self.view.show_message(tr("'{}' already is in the list.").format(d))
         except directories.InvalidPathError:
             self.view.show_message(tr("'{}' does not exist.").format(d))

     def add_selected_to_ignore_list(self):
         """Adds :attr:`selected_dupes` to :attr:`scanner`'s ignore list.
         """
@@ -367,10 +376,10 @@ class DupeGuru(Broadcaster):
                     self.scanner.ignore_list.Ignore(str(other.path), str(dupe.path))
         self.remove_duplicates(dupes)
         self.ignore_list_dialog.refresh()

     def apply_filter(self, filter):
         """Apply a filter ``filter`` to the results so that it shows only dupe groups that match it.

         :param str filter: filter to apply
         """
         self.results.apply_filter(None)
@@ -379,12 +388,12 @@ class DupeGuru(Broadcaster):
             filter = escape(filter, '*', '.')
         self.results.apply_filter(filter)
         self._results_changed()

     def clean_empty_dirs(self, path):
         if self.options['clean_empty_dirs']:
             while delete_if_empty(path, ['.DS_Store']):
                 path = path.parent()

     def copy_or_move(self, dupe, copy: bool, destination: str, dest_type: DestType):
         source_path = dupe.path
         location_path = first(p for p in self.directories if dupe.path in p)
@@ -406,20 +415,20 @@ class DupeGuru(Broadcaster):
         else:
             smart_move(source_path, dest_path)
             self.clean_empty_dirs(source_path.parent())

     def copy_or_move_marked(self, copy):
         """Start an async move (or copy) job on marked duplicates.

         :param bool copy: If True, duplicates will be copied instead of moved
         """
         def do(j):
             def op(dupe):
                 j.add_progress()
                 self.copy_or_move(dupe, copy, destination, desttype)

             j.start_job(self.results.mark_count)
             self.results.perform_on_marked(op, not copy)

         if not self.results.mark_count:
             self.view.show_message(MSG_NO_MARKED_DUPES)
             return
@@ -430,7 +439,7 @@ class DupeGuru(Broadcaster):
         desttype = self.options['copymove_dest_type']
         jobid = JobType.Copy if copy else JobType.Move
         self._start_job(jobid, do)

     def delete_marked(self):
         """Start an async job to send marked duplicates to the trash.
         """
@@ -439,14 +448,16 @@ class DupeGuru(Broadcaster):
             return
         if not self.deletion_options.show(self.results.mark_count):
             return
-        args = [self.deletion_options.link_deleted, self.deletion_options.use_hardlinks,
-            self.deletion_options.direct]
+        args = [
+            self.deletion_options.link_deleted, self.deletion_options.use_hardlinks,
+            self.deletion_options.direct
+        ]
         logging.debug("Starting deletion job with args %r", args)
         self._start_job(JobType.Delete, self._do_delete, args=args)

     def export_to_xhtml(self):
         """Export current results to XHTML.

         The configuration of the :attr:`result_table` (columns order and visibility) is used to
         determine how the data is presented in the export. In other words, the exported table in
         the resulting XHTML will look just like the results table.
@@ -454,18 +465,21 @@ class DupeGuru(Broadcaster):
         colnames, rows = self._get_export_data()
         export_path = export.export_to_xhtml(colnames, rows)
         desktop.open_path(export_path)

     def export_to_csv(self):
         """Export current results to CSV.

         The columns and their order in the resulting CSV file is determined in the same way as in
         :meth:`export_to_xhtml`.
         """
         dest_file = self.view.select_dest_file(tr("Select a destination for your exported CSV"), 'csv')
         if dest_file:
             colnames, rows = self._get_export_data()
-            export.export_to_csv(dest_file, colnames, rows)
+            try:
+                export.export_to_csv(dest_file, colnames, rows)
+            except OSError as e:
+                self.view.show_message(tr("Couldn't write to file: {}").format(str(e)))

     def get_display_info(self, dupe, group, delta=False):
         def empty_data():
             return {c.name: '---' for c in self.result_table.COLUMNS[1:]}
@@ -476,10 +490,10 @@ class DupeGuru(Broadcaster):
         except Exception as e:
             logging.warning("Exception on GetDisplayInfo for %s: %s", str(dupe.path), str(e))
             return empty_data()

     def invoke_custom_command(self):
         """Calls command in ``CustomCommand`` pref with ``%d`` and ``%r`` placeholders replaced.

         Using the current selection, ``%d`` is replaced with the currently selected dupe and ``%r``
         is replaced with that dupe's ref file. If there's no selection, the command is not invoked.
         If the dupe is a ref, ``%d`` and ``%r`` will be the same.
@@ -506,10 +520,10 @@ class DupeGuru(Broadcaster):
             subprocess.Popen(exename + args, shell=True, cwd=path)
         else:
             subprocess.Popen(cmd, shell=True)

     def load(self):
         """Load directory selection and ignore list from files in appdata.

         This method is called during startup so that directory selection and ignore list, which
         is persistent data, is the same as when the last session was closed (when :meth:`save` was
         called).
@@ -519,19 +533,19 @@ class DupeGuru(Broadcaster):
         p = op.join(self.appdata, 'ignore_list.xml')
         self.scanner.ignore_list.load_from_xml(p)
         self.ignore_list_dialog.refresh()

     def load_from(self, filename):
         """Start an async job to load results from ``filename``.

         :param str filename: path of the XML file (created with :meth:`save_as`) to load
         """
         def do(j):
             self.results.load_from_xml(filename, self._get_file, j)
         self._start_job(JobType.Load, do)

     def make_selected_reference(self):
         """Promote :attr:`selected_dupes` to reference position within their respective groups.

         Each selected dupe will become the :attr:`~core.engine.Group.ref` of its group. If there's
         more than one dupe selected for the same group, only the first (in the order currently shown
         in :attr:`result_table`) dupe will be promoted.
@@ -550,8 +564,10 @@ class DupeGuru(Broadcaster):
         # If no group was changed, however, we don't touch the selection.
         if not self.result_table.power_marker:
             if changed_groups:
-                self.selected_dupes = [d for d in self.selected_dupes
-                    if self.results.get_group_of_duplicate(d).ref is d]
+                self.selected_dupes = [
+                    d for d in self.selected_dupes
+                    if self.results.get_group_of_duplicate(d).ref is d
+                ]
             self.notify('results_changed')
         else:
             # If we're in "Dupes Only" mode (previously called Power Marker), things are a bit
@@ -560,28 +576,28 @@ class DupeGuru(Broadcaster):
             # do is to keep our selection index-wise (different dupe selection, but same index
             # selection).
             self.notify('results_changed_but_keep_selection')

     def mark_all(self):
         """Set all dupes in the results as marked.
         """
         self.results.mark_all()
         self.notify('marking_changed')

     def mark_none(self):
         """Set all dupes in the results as unmarked.
         """
         self.results.mark_none()
         self.notify('marking_changed')

     def mark_invert(self):
         """Invert the marked state of all dupes in the results.
         """
         self.results.mark_invert()
         self.notify('marking_changed')

     def mark_dupe(self, dupe, marked):
         """Change marked status of ``dupe``.

         :param dupe: dupe to mark/unmark
         :type dupe: :class:`~core.fs.File`
         :param bool marked: True = mark, False = unmark
@@ -591,7 +607,7 @@ class DupeGuru(Broadcaster):
         else:
             self.results.unmark(dupe)
         self.notify('marking_changed')

     def open_selected(self):
         """Open :attr:`selected_dupes` with their associated application.
         """
@@ -600,16 +616,16 @@ class DupeGuru(Broadcaster):
             return
         for dupe in self.selected_dupes:
             desktop.open_path(dupe.path)

     def purge_ignore_list(self):
         """Remove files that don't exist from :attr:`ignore_list`.
         """
-        self.scanner.ignore_list.Filter(lambda f,s:op.exists(f) and op.exists(s))
+        self.scanner.ignore_list.Filter(lambda f, s: op.exists(f) and op.exists(s))
         self.ignore_list_dialog.refresh()

     def remove_directories(self, indexes):
         """Remove root directories at ``indexes`` from :attr:`directories`.

         :param indexes: Indexes of the directories to remove.
         :type indexes: list of int
         """
@@ -620,30 +636,30 @@ class DupeGuru(Broadcaster):
             self.notify('directories_changed')
         except IndexError:
             pass

     def remove_duplicates(self, duplicates):
         """Remove ``duplicates`` from :attr:`results`.

         Calls :meth:`~core.results.Results.remove_duplicates` and send appropriate notifications.

         :param duplicates: duplicates to remove.
         :type duplicates: list of :class:`~core.fs.File`
         """
         self.results.remove_duplicates(self.without_ref(duplicates))
         self.notify('results_changed_but_keep_selection')

     def remove_marked(self):
         """Removed marked duplicates from the results (without touching the files themselves).
         """
         if not self.results.mark_count:
             self.view.show_message(MSG_NO_MARKED_DUPES)
             return
         msg = tr("You are about to remove %d files from results. Continue?")
         if not self.view.ask_yes_no(msg % self.results.mark_count):
             return
-        self.results.perform_on_marked(lambda x:None, True)
+        self.results.perform_on_marked(lambda x: None, True)
         self._results_changed()

     def remove_selected(self):
         """Removed :attr:`selected_dupes` from the results (without touching the files themselves).
         """
@@ -651,16 +667,16 @@ class DupeGuru(Broadcaster):
         if not dupes:
             self.view.show_message(MSG_NO_SELECTED_DUPES)
             return
         msg = tr("You are about to remove %d files from results. Continue?")
         if not self.view.ask_yes_no(msg % len(dupes)):
             return
         self.remove_duplicates(dupes)

     def rename_selected(self, newname):
         """Renames the selected dupes's file to ``newname``.

         If there's more than one selected dupes, the first one is used.

         :param str newname: The filename to rename the dupe's file to.
         """
         try:
@@ -670,13 +686,13 @@ class DupeGuru(Broadcaster):
         except (IndexError, fs.FSError) as e:
             logging.warning("dupeGuru Warning: %s" % str(e))
         return False

     def reprioritize_groups(self, sort_key):
         """Sort dupes in each group (in :attr:`results`) according to ``sort_key``.

         Called by the re-prioritize dialog. Calls :meth:`~core.engine.Group.prioritize` and, once
         the sorting is done, show a message that confirms the action.

         :param sort_key: The key being sent to :meth:`~core.engine.Group.prioritize`
         :type sort_key: f(dupe)
         """
@@ -687,11 +703,11 @@ class DupeGuru(Broadcaster):
         self._results_changed()
         msg = tr("{} duplicate groups were changed by the re-prioritization.").format(count)
         self.view.show_message(msg)

     def reveal_selected(self):
         if self.selected_dupes:
             desktop.reveal_path(self.selected_dupes[0].path)

     def save(self):
         if not op.exists(self.appdata):
             os.makedirs(self.appdata)
@@ -699,17 +715,20 @@ class DupeGuru(Broadcaster):
         p = op.join(self.appdata, 'ignore_list.xml')
         self.scanner.ignore_list.save_to_xml(p)
         self.notify('save_session')

     def save_as(self, filename):
         """Save results in ``filename``.

         :param str filename: path of the file to save results (as XML) to.
         """
-        self.results.save_to_xml(filename)
+        try:
+            self.results.save_to_xml(filename)
+        except OSError as e:
+            self.view.show_message(tr("Couldn't write to file: {}").format(str(e)))
def start_scanning(self): def start_scanning(self):
"""Starts an async job to scan for duplicates. """Starts an async job to scan for duplicates.
Scans folders selected in :attr:`directories` and put the results in :attr:`results` Scans folders selected in :attr:`directories` and put the results in :attr:`results`
""" """
def do(j): def do(j):
@@ -722,14 +741,14 @@ class DupeGuru(Broadcaster):
                 files = self._remove_hardlink_dupes(files)
             logging.info('Scanning %d files' % len(files))
             self.results.groups = self.scanner.get_dupe_groups(files, j)

         if not self.directories.has_any_file():
             self.view.show_message(tr("The selected directories contain no scannable file."))
             return
         self.results.groups = []
         self._results_changed()
         self._start_job(JobType.Scan, do)

     def toggle_selected_mark_state(self):
         selected = self.without_ref(self.selected_dupes)
         if not selected:
@@ -741,12 +760,12 @@ class DupeGuru(Broadcaster):
         for dupe in selected:
             markfunc(dupe)
         self.notify('marking_changed')

     def without_ref(self, dupes):
         """Returns ``dupes`` with all reference elements removed.
         """
         return [dupe for dupe in dupes if self.results.get_group_of_duplicate(dupe).ref is not dupe]

     def get_default(self, key, fallback_value=None):
         result = nonone(self.view.get_default(key), fallback_value)
         if fallback_value is not None and not isinstance(result, type(fallback_value)):
@@ -756,10 +775,10 @@ class DupeGuru(Broadcaster):
         except Exception:
             result = fallback_value
         return result

     def set_default(self, key, value):
         self.view.set_default(key, value)

     #--- Properties
     @property
     def stat_line(self):
@@ -767,4 +786,4 @@ class DupeGuru(Broadcaster):
         if self.scanner.discarded_file_count:
             result = tr("%s (%d discarded)") % (result, self.scanner.discarded_file_count)
         return result
@@ -1,15 +1,15 @@
 # Created By: Virgil Dupras
 # Created On: 2006/02/27
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 from xml.etree import ElementTree as ET
 import logging

-from jobprogress import job
+from hscommon.jobprogress import job
 from hscommon.path import Path
 from hscommon.util import FileOrPath
@@ -24,7 +24,7 @@ __all__ = [
 class DirectoryState:
     """Enum describing how a folder should be considered.

     * DirectoryState.Normal: Scan all files normally
     * DirectoryState.Reference: Scan files, but make sure never to delete any of them
     * DirectoryState.Excluded: Don't scan this folder
@@ -41,10 +41,10 @@ class InvalidPathError(Exception):
 class Directories:
     """Holds user folder selection.

     Manages the selection that the user make through the folder selection dialog. It also manages
     folder states, and how recursion applies to them.

     Then, when the user starts the scan, :meth:`get_files` is called to retrieve all files (wrapped
     in :mod:`core.fs`) that have to be scanned according to the chosen folders/states.
     """
@@ -55,28 +55,28 @@ class Directories:
         self.states = {}
         self.fileclasses = fileclasses
         self.folderclass = fs.Folder

     def __contains__(self, path):
         for p in self._dirs:
             if path in p:
                 return True
         return False

-    def __delitem__(self,key):
+    def __delitem__(self, key):
         self._dirs.__delitem__(key)

-    def __getitem__(self,key):
+    def __getitem__(self, key):
         return self._dirs.__getitem__(key)

     def __len__(self):
         return len(self._dirs)

     #---Private
     def _default_state_for_path(self, path):
         # Override this in subclasses to specify the state of some special folders.
         if path.name.startswith('.'): # hidden
             return DirectoryState.Excluded

     def _get_files(self, from_path, j):
         j.check_if_cancelled()
         state = self.get_state(from_path)
@@ -95,14 +95,15 @@ class Directories:
                     file.is_ref = state == DirectoryState.Reference
                     filepaths.add(file.path)
                     yield file
-            # it's possible that a folder (bundle) gets into the file list. in that case, we don't want to recurse into it
+            # it's possible that a folder (bundle) gets into the file list. in that case, we don't
+            # want to recurse into it
             subfolders = [p for p in from_path.listdir() if not p.islink() and p.isdir() and p not in filepaths]
             for subfolder in subfolders:
                 for file in self._get_files(subfolder, j):
                     yield file
         except (EnvironmentError, fs.InvalidPath):
             pass

     def _get_folders(self, from_folder, j):
         j.check_if_cancelled()
         try:
@@ -116,16 +117,16 @@ class Directories:
yield from_folder yield from_folder
except (EnvironmentError, fs.InvalidPath): except (EnvironmentError, fs.InvalidPath):
pass pass
#---Public #---Public
def add_path(self, path): def add_path(self, path):
"""Adds ``path`` to self, if not already there. """Adds ``path`` to self, if not already there.
Raises :exc:`AlreadyThereError` if ``path`` is already in self. If path is a directory Raises :exc:`AlreadyThereError` if ``path`` is already in self. If path is a directory
containing some of the directories already present in self, ``path`` will be added, but all containing some of the directories already present in self, ``path`` will be added, but all
directories under it will be removed. Can also raise :exc:`InvalidPathError` if ``path`` directories under it will be removed. Can also raise :exc:`InvalidPathError` if ``path``
does not exist. does not exist.
:param Path path: path to add :param Path path: path to add
""" """
if path in self: if path in self:
@@ -134,43 +135,43 @@ class Directories:
raise InvalidPathError() raise InvalidPathError()
self._dirs = [p for p in self._dirs if p not in path] self._dirs = [p for p in self._dirs if p not in path]
self._dirs.append(path) self._dirs.append(path)
@staticmethod @staticmethod
def get_subfolders(path): def get_subfolders(path):
"""Returns a sorted list of paths corresponding to subfolders in ``path``. """Returns a sorted list of paths corresponding to subfolders in ``path``.
:param Path path: get subfolders from there :param Path path: get subfolders from there
:rtype: list of Path :rtype: list of Path
""" """
try: try:
subpaths = [p for p in path.listdir() if p.isdir()] subpaths = [p for p in path.listdir() if p.isdir()]
subpaths.sort(key=lambda x:x.name.lower()) subpaths.sort(key=lambda x: x.name.lower())
return subpaths return subpaths
except EnvironmentError: except EnvironmentError:
return [] return []
def get_files(self, j=job.nulljob): def get_files(self, j=job.nulljob):
"""Returns a list of all files that are not excluded. """Returns a list of all files that are not excluded.
Returned files also have their ``is_ref`` attr set if applicable. Returned files also have their ``is_ref`` attr set if applicable.
""" """
for path in self._dirs: for path in self._dirs:
for file in self._get_files(path, j): for file in self._get_files(path, j):
yield file yield file
def get_folders(self, j=job.nulljob): def get_folders(self, j=job.nulljob):
"""Returns a list of all folders that are not excluded. """Returns a list of all folders that are not excluded.
Returned folders also have their ``is_ref`` attr set if applicable. Returned folders also have their ``is_ref`` attr set if applicable.
""" """
for path in self._dirs: for path in self._dirs:
from_folder = self.folderclass(path) from_folder = self.folderclass(path)
for folder in self._get_folders(from_folder, j): for folder in self._get_folders(from_folder, j):
yield folder yield folder
def get_state(self, path): def get_state(self, path):
"""Returns the state of ``path``. """Returns the state of ``path``.
:rtype: :class:`DirectoryState` :rtype: :class:`DirectoryState`
""" """
if path in self.states: if path in self.states:
@@ -183,12 +184,12 @@ class Directories:
return self.get_state(parent) return self.get_state(parent)
else: else:
return DirectoryState.Normal return DirectoryState.Normal
def has_any_file(self): def has_any_file(self):
"""Returns whether selected folders contain any file. """Returns whether selected folders contain any file.
Because it stops at the first file it finds, it's much faster than get_files(). Because it stops at the first file it finds, it's much faster than get_files().
:rtype: bool :rtype: bool
""" """
try: try:
@@ -196,10 +197,10 @@ class Directories:
return True return True
except StopIteration: except StopIteration:
return False return False
def load_from_file(self, infile): def load_from_file(self, infile):
"""Load folder selection from ``infile``. """Load folder selection from ``infile``.
:param file infile: path or file pointer to XML generated through :meth:`save_to_file` :param file infile: path or file pointer to XML generated through :meth:`save_to_file`
""" """
try: try:
@@ -222,10 +223,10 @@ class Directories:
path = attrib['path'] path = attrib['path']
state = attrib['value'] state = attrib['value']
self.states[Path(path)] = int(state) self.states[Path(path)] = int(state)
def save_to_file(self, outfile): def save_to_file(self, outfile):
"""Save folder selection as XML to ``outfile``. """Save folder selection as XML to ``outfile``.
:param file outfile: path or file pointer to XML file to save to. :param file outfile: path or file pointer to XML file to save to.
""" """
with FileOrPath(outfile, 'wb') as fp: with FileOrPath(outfile, 'wb') as fp:
@@ -239,10 +240,10 @@ class Directories:
state_node.set('value', str(state)) state_node.set('value', str(state))
tree = ET.ElementTree(root) tree = ET.ElementTree(root)
tree.write(fp, encoding='utf-8') tree.write(fp, encoding='utf-8')
def set_state(self, path, state): def set_state(self, path, state):
"""Set the state of folder at ``path``. """Set the state of folder at ``path``.
:param Path path: path of the target folder :param Path path: path of the target folder
:param state: state to set folder to :param state: state to set folder to
:type state: :class:`DirectoryState` :type state: :class:`DirectoryState`
@@ -253,4 +254,4 @@ class Directories:
if path.is_parent_of(iter_path): if path.is_parent_of(iter_path):
del self.states[iter_path] del self.states[iter_path]
self.states[path] = state self.states[path] = state
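The `get_state()` logic above falls back to the nearest ancestor's state when a path has no explicit entry, so excluding or marking a folder applies to everything under it. A minimal standalone sketch of that inheritance rule (hypothetical names and state values; not dupeGuru's actual classes):

```python
from pathlib import PurePosixPath

# Hypothetical state constants for this sketch only
NORMAL, REFERENCE, EXCLUDED = range(3)

def resolve_state(states, path):
    """Return the state for ``path``: an explicit entry wins, otherwise the
    state is inherited from the closest ancestor that has one."""
    path = PurePosixPath(path)
    if path in states:
        return states[path]
    for parent in path.parents:  # nearest ancestor first
        if parent in states:
            return states[parent]
    return NORMAL

states = {
    PurePosixPath('/photos'): REFERENCE,
    PurePosixPath('/photos/tmp'): EXCLUDED,
}
# '/photos/2014/trip.jpg' has no entry of its own: it inherits from '/photos'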

View File

@@ -1,9 +1,9 @@
# Created By: Virgil Dupras # Created By: Virgil Dupras
# Created On: 2006/01/29 # Created On: 2006/01/29
# Copyright 2013 Hardcoded Software (http://www.hardcoded.net) # Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
# #
# This software is licensed under the "BSD" License as described in the "LICENSE" file, # This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at # which should be included with this package. The terms are also available at
# http://www.hardcoded.net/licenses/bsd_license # http://www.hardcoded.net/licenses/bsd_license
import difflib import difflib
@@ -15,11 +15,13 @@ from unicodedata import normalize
from hscommon.util import flatten, multi_replace from hscommon.util import flatten, multi_replace
from hscommon.trans import tr from hscommon.trans import tr
from jobprogress import job from hscommon.jobprogress import job
(WEIGHT_WORDS, (
MATCH_SIMILAR_WORDS, WEIGHT_WORDS,
NO_FIELD_ORDER) = range(3) MATCH_SIMILAR_WORDS,
NO_FIELD_ORDER,
) = range(3)
JOB_REFRESH_RATE = 100 JOB_REFRESH_RATE = 100
@@ -45,7 +47,7 @@ def unpack_fields(fields):
def compare(first, second, flags=()): def compare(first, second, flags=()):
"""Returns the % of words that match between ``first`` and ``second`` """Returns the % of words that match between ``first`` and ``second``
The result is a ``int`` in the range 0..100. The result is a ``int`` in the range 0..100.
``first`` and ``second`` can be either a string or a list (of words). ``first`` and ``second`` can be either a string or a list (of words).
""" """
@@ -53,7 +55,7 @@ def compare(first, second, flags=()):
return 0 return 0
if any(isinstance(element, list) for element in first): if any(isinstance(element, list) for element in first):
return compare_fields(first, second, flags) return compare_fields(first, second, flags)
second = second[:] #We must use a copy of second because we remove items from it second = second[:] #We must use a copy of second because we remove items from it
match_similar = MATCH_SIMILAR_WORDS in flags match_similar = MATCH_SIMILAR_WORDS in flags
weight_words = WEIGHT_WORDS in flags weight_words = WEIGHT_WORDS in flags
joined = first + second joined = first + second
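The word-matching idea behind `compare()` can be sketched in a few lines. This is a simplified stand-in (hypothetical function name; the real implementation also handles `WEIGHT_WORDS` and similar-word matching): each word may only match once, which is why the function consumes matches from a copy of `second`.

```python
def word_match_percentage(first, second):
    """Simplified word matcher: % of words shared between the two word lists."""
    if not (first and second):
        return 0
    total_count = len(first) + len(second)
    second = second[:]  # work on a copy; matched words are consumed from it
    match_count = 0
    for word in first:
        if word in second:
            match_count += 1
            second.remove(word)  # each word can only match once
    # each matched word counts on both sides, hence match_count * 2
    return round(match_count * 2 / total_count * 100)
```

For example, `['foo', 'bar', 'baz']` vs `['foo', 'bar', 'qux']` shares 2 of 3 words on each side, giving 67.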
@@ -77,9 +79,9 @@ def compare(first, second, flags=()):
def compare_fields(first, second, flags=()): def compare_fields(first, second, flags=()):
"""Returns the score for the lowest matching :ref:`fields`. """Returns the score for the lowest matching :ref:`fields`.
``first`` and ``second`` must be lists of lists of string. Each sub-list is then compared with ``first`` and ``second`` must be lists of lists of string. Each sub-list is then compared with
:func:`compare`. :func:`compare`.
""" """
if len(first) != len(second): if len(first) != len(second):
return 0 return 0
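Because `compare_fields()` takes the *lowest* pairwise score, a single mismatched field (say, the title of a song) sinks the whole match even if every other field is identical. A self-contained sketch of that rule, with a simplified word scorer (hypothetical names, not the library's API):

```python
def score(a, b):
    """Simplified word-overlap score in 0..100 (stand-in for engine.compare)."""
    if not (a and b):
        return 0
    total = len(a) + len(b)
    b = b[:]  # copy: matched words are consumed
    hits = 0
    for w in a:
        if w in b:
            hits += 1
            b.remove(w)
    return round(hits * 2 / total * 100)

def lowest_field_score(first, second):
    """Fields match pairwise (artist with artist, title with title); the
    overall score is the weakest field's score."""
    if len(first) != len(second):
        return 0
    return min(score(f, s) for f, s in zip(first, second))
```

With a matching artist field but a completely different title field, the result is 0, not an average.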
@@ -104,10 +106,10 @@ def compare_fields(first, second, flags=()):
def build_word_dict(objects, j=job.nulljob):
    """Returns a dict of objects mapped by their words.

    objects must have a ``words`` attribute being a list of strings or a list of lists of strings
    (:ref:`fields`).

    The result will be a dict with words as keys, lists of objects as values.
    """
    result = defaultdict(set)
@@ -118,7 +120,7 @@ def build_word_dict(objects, j=job.nulljob):
def merge_similar_words(word_dict):
    """Take all keys in ``word_dict`` that are similar, and merge them together.

    ``word_dict`` has been built with :func:`build_word_dict`. Similarity is computed with Python's
    ``difflib.get_close_matches()``, which computes the number of edits that are necessary to make
    a word equal to the other.
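`difflib.get_close_matches()` is a standard-library function: it ranks candidate words by edit similarity to a target and keeps those above a cutoff ratio (0.6 by default), best matches first. For example:

```python
import difflib

words = ['maxi', 'maxis', 'remix', 'single']
# 'maxi' matches itself exactly and is very close to 'maxis';
# 'remix' and 'single' fall below the default 0.6 similarity cutoff
close = difflib.get_close_matches('maxi', words)
```

This is what lets `merge_similar_words()` treat near-identical spellings as one key.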
@@ -138,9 +140,9 @@ def merge_similar_words(word_dict):
def reduce_common_words(word_dict, threshold):
    """Remove all objects from ``word_dict`` values where the object count >= ``threshold``

    ``word_dict`` has been built with :func:`build_word_dict`.

    The exception to this removal are the objects where all the words of the object are common.
    Because if we remove them, we would miss some duplicates!
    """
@@ -181,17 +183,17 @@ class Match(namedtuple('Match', 'first second percentage')):
    exact scan methods, such as Contents scans, this will always be 100.
    """
    __slots__ = ()

def get_match(first, second, flags=()):
    # it is assumed here that first and second both have a "words" attribute
    percentage = compare(first.words, second.words, flags)
    return Match(first, second, percentage)

def getmatches(
        objects, min_match_percentage=0, match_similar_words=False, weight_words=False,
        no_field_order=False, j=job.nulljob):
    """Returns a list of :class:`Match` within ``objects`` after fuzzily matching their words.

    :param objects: List of :class:`~core.fs.File` to match.
    :param int min_match_percentage: minimum % of words that have to match.
    :param bool match_similar_words: make similar words (see :func:`merge_similar_words`) match.
@@ -246,7 +248,7 @@ def getmatches(
def getmatches_by_contents(files, sizeattr='size', partial=False, j=job.nulljob):
    """Returns a list of :class:`Match` within ``files`` if their contents are the same.

    :param str sizeattr: attribute name of the :class:`~core.fs.File` that returns the size of the
        file to use for comparison.
    :param bool partial: if true, will use the "md5partial" attribute instead of "md5" to compute
@@ -259,6 +261,7 @@ def getmatches_by_contents(files, sizeattr='size', partial=False, j=job.nulljob)
        filesize = getattr(file, sizeattr)
        if filesize:
            size2files[filesize].add(file)
    del files
    possible_matches = [files for files in size2files.values() if len(files) > 1]
    del size2files
    result = []
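`getmatches_by_contents()` buckets files by size before doing any hashing: two files can only be byte-identical if their sizes are equal, so files with a unique size never need to be read at all. A standalone sketch of that pre-grouping step, using `(name, size)` tuples in place of real file objects:

```python
from collections import defaultdict

def group_by_size(files):
    """Keep only the size buckets that contain more than one file; every
    other file is provably not a content duplicate."""
    size2files = defaultdict(set)
    for f in files:
        size2files[f[1]].add(f)  # f = (name, size) in this sketch
    return [group for group in size2files.values() if len(group) > 1]
```

Only files inside a returned group go on to the (expensive) md5 comparison.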
@@ -278,44 +281,44 @@ class Group:
    This manages match pairs into groups and ensures that all files in the group match to each
    other.

    .. attribute:: ref

        The "reference" file, which is the file among the group that isn't going to be deleted.

    .. attribute:: ordered

        Ordered list of duplicates in the group (including the :attr:`ref`).

    .. attribute:: unordered

        Set of duplicates in the group (including the :attr:`ref`).

    .. attribute:: dupes

        An ordered list of the group's duplicates, without :attr:`ref`. Equivalent to
        ``ordered[1:]``

    .. attribute:: percentage

        Average match percentage of match pairs containing :attr:`ref`.
    """
    #---Override
    def __init__(self):
        self._clear()

    def __contains__(self, item):
        return item in self.unordered

    def __getitem__(self, key):
        return self.ordered.__getitem__(key)

    def __iter__(self):
        return iter(self.ordered)

    def __len__(self):
        return len(self.ordered)

    #---Private
    def _clear(self):
        self._percentage = None
@@ -324,22 +327,22 @@ class Group:
        self.candidates = defaultdict(set)
        self.ordered = []
        self.unordered = set()

    def _get_matches_for_ref(self):
        if self._matches_for_ref is None:
            ref = self.ref
            self._matches_for_ref = [match for match in self.matches if ref in match]
        return self._matches_for_ref

    #---Public
    def add_match(self, match):
        """Adds ``match`` to internal match list and possibly adds duplicates to the group.

        A duplicate can only be considered as such if it matches all other duplicates in the group.
        This method registers the pair (A, B) represented in ``match`` as possible candidates and,
        if A and/or B end up matching every other duplicate in the group, adds these duplicates to
        the group.

        :param tuple match: pair of :class:`~core.fs.File` to add
        """
        def add_candidate(item, match):
@@ -348,7 +351,7 @@ class Group:
            if self.unordered <= matches:
                self.ordered.append(item)
                self.unordered.add(item)

        if match in self.matches:
            return
        self.matches.add(match)
@@ -359,17 +362,17 @@ class Group:
            add_candidate(second, first)
        self._percentage = None
        self._matches_for_ref = None

    def discard_matches(self):
        """Remove all recorded matches that didn't result in a duplicate being added to the group.

        You can call this after the duplicate scanning process to free a bit of memory.
        """
        discarded = set(m for m in self.matches if not all(obj in self.unordered for obj in [m.first, m.second]))
        self.matches -= discarded
        self.candidates = defaultdict(set)
        return discarded

    def get_match_of(self, item):
        """Returns the match pair between ``item`` and :attr:`ref`.
        """
@@ -378,10 +381,10 @@ class Group:
        for m in self._get_matches_for_ref():
            if item in m:
                return m

    def prioritize(self, key_func, tie_breaker=None):
        """Reorders :attr:`ordered` according to ``key_func``.

        :param key_func: Key (f(x)) to be used for sorting
        :param tie_breaker: function to be used to select the reference position in case the top
            duplicates have the same key_func() result.
@@ -405,7 +408,7 @@ class Group:
                self.switch_ref(ref)
                return True
        return changed

    def remove_dupe(self, item, discard_matches=True):
        try:
            self.ordered.remove(item)
@@ -419,7 +422,7 @@ class Group:
                self._clear()
        except ValueError:
            pass

    def switch_ref(self, with_dupe):
        """Make the :attr:`ref` dupe of the group switch position with ``with_dupe``.
        """
@@ -433,9 +436,9 @@ class Group:
            return True
        except ValueError:
            return False

    dupes = property(lambda self: self[1:])

    @property
    def percentage(self):
        if self._percentage is None:
@@ -445,16 +448,16 @@ class Group:
        else:
            self._percentage = 0
        return self._percentage

    @property
    def ref(self):
        if self:
            return self[0]

def get_groups(matches, j=job.nulljob):
    """Returns a list of :class:`Group` from ``matches``.

    Create groups out of match pairs in the smartest way possible.
    """
    matches.sort(key=lambda match: -match.percentage)
@@ -495,7 +498,10 @@ def get_groups(matches, j=job.nulljob):
    matched_files = set(flatten(groups))
    orphan_matches = []
    for group in groups:
        orphan_matches += {
            m for m in group.discard_matches()
            if not any(obj in matched_files for obj in [m.first, m.second])
        }
    if groups and orphan_matches:
        groups += get_groups(orphan_matches)  # no job, as it isn't supposed to take a long time
    return groups
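The core invariant of `Group` and `get_groups()` is that a file joins a group only if it matches *every* file already in it; pairs that end up fully outside any group are retried in a later pass (the "orphan matches" recursion above). A greatly simplified greedy sketch of that invariant (hypothetical function; dupeGuru's real algorithm also weighs match percentages and candidate bookkeeping):

```python
def naive_groups(matches):
    """Greedy all-pairs grouping: a candidate joins a group only if a match
    pair exists between it and every current member."""
    pair = {frozenset(m) for m in matches}
    groups = []
    used = set()
    for a, b in matches:  # assume matches are already sorted best-first
        if a in used or b in used:
            continue
        group = [a, b]
        used |= {a, b}
        # try to extend the group with files that match every current member
        for x, y in matches:
            for cand in (x, y):
                if cand not in used and all(frozenset((cand, g)) in pair for g in group):
                    group.append(cand)
                    used.add(cand)
        groups.append(group)
    return groups
```

With matches (a, b), (a, c), (b, c), (d, e), this yields two groups: {a, b, c} (every pair matches) and {d, e}.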

View File

@@ -1,9 +1,9 @@
# Created By: Virgil Dupras # Created By: Virgil Dupras
# Created On: 2006/09/16 # Created On: 2006/09/16
# Copyright 2013 Hardcoded Software (http://www.hardcoded.net) # Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
# #
# This software is licensed under the "BSD" License as described in the "LICENSE" file, # This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at # which should be included with this package. The terms are also available at
# http://www.hardcoded.net/licenses/bsd_license # http://www.hardcoded.net/licenses/bsd_license
import os.path as op import os.path as op
@@ -19,56 +19,56 @@ MAIN_TEMPLATE = """
<html xmlns="http://www.w3.org/1999/xhtml"> <html xmlns="http://www.w3.org/1999/xhtml">
<head> <head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/> <meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
<title>dupeGuru Results</title> <title>dupeGuru Results</title>
<style type="text/css"> <style type="text/css">
BODY BODY
{ {
background-color:white; background-color:white;
} }
BODY,A,P,UL,TABLE,TR,TD BODY,A,P,UL,TABLE,TR,TD
{ {
font-family:Tahoma,Arial,sans-serif; font-family:Tahoma,Arial,sans-serif;
font-size:10pt; font-size:10pt;
color: #4477AA; color: #4477AA;
} }
TABLE TABLE
{ {
background-color: #225588; background-color: #225588;
margin-left: auto; margin-left: auto;
margin-right: auto; margin-right: auto;
width: 90%; width: 90%;
} }
TR TR
{ {
background-color: white; background-color: white;
} }
TH TH
{ {
font-weight: bold; font-weight: bold;
color: black; color: black;
background-color: #C8D6E5; background-color: #C8D6E5;
} }
TH TD TH TD
{ {
color:black; color:black;
} }
TD TD
{ {
padding-left: 2pt; padding-left: 2pt;
} }
TD.rightelem TD.rightelem
{ {
text-align:right; text-align:right;
/*padding-left:0pt;*/ /*padding-left:0pt;*/
padding-right: 2pt; padding-right: 2pt;
width: 17%; width: 17%;
} }
TD.indented TD.indented
@@ -78,19 +78,19 @@ TD.indented
H1 H1
{ {
font-family:&quot;Courier New&quot;,monospace; font-family:&quot;Courier New&quot;,monospace;
color:#6699CC; color:#6699CC;
font-size:18pt; font-size:18pt;
color:#6da500; color:#6da500;
border-color: #70A0CF; border-color: #70A0CF;
border-width: 1pt; border-width: 1pt;
border-style: solid; border-style: solid;
margin-top: 16pt; margin-top: 16pt;
margin-left: 5%; margin-left: 5%;
margin-right: 5%; margin-right: 5%;
padding-top: 2pt; padding-top: 2pt;
padding-bottom:2pt; padding-bottom:2pt;
text-align: center; text-align: center;
} }
</style> </style>
</head> </head>

View File

@@ -1,9 +1,9 @@
# Created By: Virgil Dupras # Created By: Virgil Dupras
# Created On: 2009-10-22 # Created On: 2009-10-22
# Copyright 2013 Hardcoded Software (http://www.hardcoded.net) # Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
# #
# This software is licensed under the "BSD" License as described in the "LICENSE" file, # This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at # which should be included with this package. The terms are also available at
# http://www.hardcoded.net/licenses/bsd_license # http://www.hardcoded.net/licenses/bsd_license
# This is a fork from hsfs. The reason for this fork is that hsfs has been designed for musicGuru # This is a fork from hsfs. The reason for this fork is that hsfs has been designed for musicGuru
@@ -32,6 +32,7 @@ NOT_SET = object()
class FSError(Exception): class FSError(Exception):
cls_message = "An error has occured on '{name}' in '{parent}'" cls_message = "An error has occured on '{name}' in '{parent}'"
def __init__(self, fsobject, parent=None): def __init__(self, fsobject, parent=None):
message = self.cls_message message = self.cls_message
if isinstance(fsobject, str): if isinstance(fsobject, str):
@@ -42,7 +43,7 @@ class FSError(Exception):
name = '' name = ''
parentname = str(parent) if parent is not None else '' parentname = str(parent) if parent is not None else ''
Exception.__init__(self, message.format(name=name, parent=parentname)) Exception.__init__(self, message.format(name=name, parent=parentname))
class AlreadyExistsError(FSError): class AlreadyExistsError(FSError):
"The directory or file name we're trying to add already exists" "The directory or file name we're trying to add already exists"
@@ -57,7 +58,7 @@ class InvalidDestinationError(FSError):
cls_message = "'{name}' is an invalid destination for this operation." cls_message = "'{name}' is an invalid destination for this operation."
class OperationError(FSError): class OperationError(FSError):
"""A copy/move/delete operation has been called, but the checkup after the """A copy/move/delete operation has been called, but the checkup after the
operation shows that it didn't work.""" operation shows that it didn't work."""
cls_message = "Operation on '{name}' failed." cls_message = "Operation on '{name}' failed."
@@ -74,15 +75,15 @@ class File:
# files, I saved 35% memory usage with "unread" files (no _read_info() call) and gains become # files, I saved 35% memory usage with "unread" files (no _read_info() call) and gains become
# even greater when we take into account read attributes (70%!). Yeah, it's worth it. # even greater when we take into account read attributes (70%!). Yeah, it's worth it.
__slots__ = ('path', 'is_ref', 'words') + tuple(INITIAL_INFO.keys()) __slots__ = ('path', 'is_ref', 'words') + tuple(INITIAL_INFO.keys())
def __init__(self, path): def __init__(self, path):
self.path = path self.path = path
for attrname in self.INITIAL_INFO: for attrname in self.INITIAL_INFO:
setattr(self, attrname, NOT_SET) setattr(self, attrname, NOT_SET)
def __repr__(self): def __repr__(self):
return "<{} {}>".format(self.__class__.__name__, str(self.path)) return "<{} {}>".format(self.__class__.__name__, str(self.path))
def __getattribute__(self, attrname): def __getattribute__(self, attrname):
result = object.__getattribute__(self, attrname) result = object.__getattribute__(self, attrname)
if result is NOT_SET: if result is NOT_SET:
@@ -94,12 +95,12 @@ class File:
if result is NOT_SET: if result is NOT_SET:
result = self.INITIAL_INFO[attrname] result = self.INITIAL_INFO[attrname]
return result return result
#This offset is where we should start reading the file to get a partial md5 #This offset is where we should start reading the file to get a partial md5
#For audio file, it should be where audio data starts #For audio file, it should be where audio data starts
def _get_md5partial_offset_and_size(self): def _get_md5partial_offset_and_size(self):
return (0x4000, 0x4000) #16Kb return (0x4000, 0x4000) #16Kb
def _read_info(self, field): def _read_info(self, field):
if field in ('size', 'mtime'): if field in ('size', 'mtime'):
stats = self.path.stat() stats = self.path.stat()
@@ -129,24 +130,24 @@ class File:
fp.close() fp.close()
except Exception: except Exception:
pass pass
def _read_all_info(self, attrnames=None): def _read_all_info(self, attrnames=None):
"""Cache all possible info. """Cache all possible info.
If `attrnames` is not None, caches only attrnames. If `attrnames` is not None, caches only attrnames.
""" """
if attrnames is None: if attrnames is None:
attrnames = self.INITIAL_INFO.keys() attrnames = self.INITIAL_INFO.keys()
for attrname in attrnames: for attrname in attrnames:
getattr(self, attrname) getattr(self, attrname)
#--- Public #--- Public
@classmethod @classmethod
def can_handle(cls, path): def can_handle(cls, path):
"""Returns whether this file wrapper class can handle ``path``. """Returns whether this file wrapper class can handle ``path``.
""" """
return not path.islink() and path.isfile() return not path.islink() and path.isfile()
def rename(self, newname): def rename(self, newname):
if newname == self.name: if newname == self.name:
return return
@@ -160,42 +161,42 @@ class File:
if not destpath.exists(): if not destpath.exists():
raise OperationError(self) raise OperationError(self)
         self.path = destpath

     def get_display_info(self, group, delta):
         """Returns a display-ready dict of dupe's data.
         """
         raise NotImplementedError()

     #--- Properties
     @property
     def extension(self):
         return get_file_ext(self.name)

     @property
     def name(self):
         return self.path.name

     @property
     def folder_path(self):
         return self.path.parent()


 class Folder(File):
     """A wrapper around a folder path.

     It has the size/md5 info of a File, but its values are the sum of its subitems'.
     """
     __slots__ = File.__slots__ + ('_subfolders', )

     def __init__(self, path):
         File.__init__(self, path)
         self._subfolders = None

     def _all_items(self):
         folders = self.subfolders
         files = get_files(self.path)
         return folders + files

     def _read_info(self, field):
         if field in {'size', 'mtime'}:
             size = sum((f.size for f in self._all_items()), 0)
@@ -208,31 +209,31 @@ class Folder(File):
         # different md5 if a file gets moved in a different subdirectory.
         def get_dir_md5_concat():
             items = self._all_items()
-            items.sort(key=lambda f:f.path)
+            items.sort(key=lambda f: f.path)
             md5s = [getattr(f, field) for f in items]
             return b''.join(md5s)

         md5 = hashlib.md5(get_dir_md5_concat())
         digest = md5.digest()
         setattr(self, field, digest)

     @property
     def subfolders(self):
         if self._subfolders is None:
             subfolders = [p for p in self.path.listdir() if not p.islink() and p.isdir()]
             self._subfolders = [self.__class__(p) for p in subfolders]
         return self._subfolders

     @classmethod
     def can_handle(cls, path):
         return not path.islink() and path.isdir()


 def get_file(path, fileclasses=[File]):
     """Wraps ``path`` around its appropriate :class:`File` class.

     Whether a class is "appropriate" is decided by :meth:`File.can_handle`

     :param Path path: path to wrap
     :param fileclasses: List of candidate :class:`File` classes
     """
@@ -242,7 +243,7 @@ def get_file(path, fileclasses=[File]):
 def get_files(path, fileclasses=[File]):
     """Returns a list of :class:`File` for each file contained in ``path``.

     :param Path path: path to scan
     :param fileclasses: List of candidate :class:`File` classes
     """
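The hunk above only reformats a lambda, but the surrounding `get_dir_md5_concat` logic is the interesting part: a folder's digest hashes the concatenation of its children's digests sorted by path, so the same set of files produces a different folder md5 if one file moves to a different subdirectory. A minimal standalone sketch of that idea (`folder_digest` and the path-to-digest dict are hypothetical stand-ins for `Folder._all_items()`, not the actual dupeGuru API):

```python
import hashlib

def folder_digest(child_digests_by_path):
    """Hash the concatenation of children's digests, sorted by path."""
    # Sort by path, mirroring items.sort(key=lambda f: f.path) in the diff.
    items = sorted(child_digests_by_path.items())
    concat = b''.join(digest for _path, digest in items)
    return hashlib.md5(concat).digest()

# Same two file digests, but one file lives in a different subdirectory:
a = folder_digest({'dir1/f': b'\x01', 'dir2/g': b'\x02'})
b = folder_digest({'dir2/f': b'\x01', 'dir1/g': b'\x02'})
# a != b -- moving a file between subdirectories changes the folder digest.
```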

View File

@@ -12,4 +12,5 @@ either Cocoa's ``NSTableView`` or Qt's ``QTableView``. It tells them which cell
 blue, which is supposed to be orange, does the sorting logic, holds selection, etc..

 .. _cross-toolkit: http://www.hardcoded.net/articles/cross-toolkit-software
 """

View File

@@ -1,31 +1,30 @@
 # Created By: Virgil Dupras
 # Created On: 2010-02-06
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 from hscommon.notify import Listener
-from hscommon.gui.base import NoopGUI

 class DupeGuruGUIObject(Listener):
     def __init__(self, app):
         Listener.__init__(self, app)
         self.app = app

     def directories_changed(self):
         pass

     def dupes_selected(self):
         pass

     def marking_changed(self):
         pass

     def results_changed(self):
         pass

     def results_changed_but_keep_selection(self):
         pass

View File

@@ -1,5 +1,5 @@
 # Created On: 2012-05-30
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2010-02-05
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2010-02-06
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,5 +1,5 @@
 # Created On: 2012/03/13
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2012-03-13
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,9 +1,9 @@
 # Created By: Virgil Dupras
 # Created On: 2011-09-06
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 from hscommon.gui.base import GUIObject
@@ -13,7 +13,7 @@ class CriterionCategoryList(GUISelectableList):
     def __init__(self, dialog):
         self.dialog = dialog
         GUISelectableList.__init__(self, [c.NAME for c in dialog.categories])

     def _update_selection(self):
         self.dialog.select_category(self.dialog.categories[self.selected_index])
         GUISelectableList._update_selection(self)
@@ -22,10 +22,10 @@ class PrioritizationList(GUISelectableList):
     def __init__(self, dialog):
         self.dialog = dialog
         GUISelectableList.__init__(self)

     def _refresh_contents(self):
         self[:] = [crit.display for crit in self.dialog.prioritizations]

     def move_indexes(self, indexes, dest_index):
         indexes.sort()
         prilist = self.dialog.prioritizations
@@ -34,7 +34,7 @@ class PrioritizationList(GUISelectableList):
             del prilist[i]
         prilist[dest_index:dest_index] = selected
         self._refresh_contents()

     def remove_selected(self):
         prilist = self.dialog.prioritizations
         for i in sorted(self.selected_indexes, reverse=True):
@@ -51,15 +51,15 @@ class PrioritizeDialog(GUIObject):
         self.criteria_list = GUISelectableList()
         self.prioritizations = []
         self.prioritization_list = PrioritizationList(self)

     #--- Override
     def _view_updated(self):
         self.category_list.select(0)

     #--- Private
     def _sort_key(self, dupe):
         return tuple(crit.sort_key(dupe) for crit in self.prioritizations)

     #--- Public
     def select_category(self, category):
         self.criteria = category.criteria_list()
@@ -71,10 +71,11 @@ class PrioritizeDialog(GUIObject):
             return
         crit = self.criteria[self.criteria_list.selected_index]
         self.prioritizations.append(crit)
+        del crit
         self.prioritization_list[:] = [crit.display for crit in self.prioritizations]

     def remove_selected(self):
         self.prioritization_list.remove_selected()

     def perform_reprioritization(self):
         self.app.reprioritize_groups(self._sort_key)
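For context on the `move_indexes` hunk above: the pattern is to collect the selected items in order, delete them from the highest index down so earlier indexes stay valid, then splice them back in at the destination. A standalone sketch of that pattern (not the actual `GUISelectableList` code, and note the real code's `dest_index` may be interpreted relative to the list after deletion):

```python
def move_indexes(items, indexes, dest_index):
    # Collect the moved elements in their current relative order.
    indexes = sorted(indexes)
    selected = [items[i] for i in indexes]
    # Delete from the highest index down so remaining indexes stay valid.
    for i in reversed(indexes):
        del items[i]
    # Splice the moved elements back in at the destination.
    items[dest_index:dest_index] = selected
    return items

result = move_indexes(list('abcde'), [1, 3], 0)  # move 'b' and 'd' to the front
```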

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2010-04-12
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2010-04-12
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2010-02-11
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2010-02-11
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,9 +1,9 @@
 # Created By: Virgil Dupras
 # Created On: 2006/05/02
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 from xml.etree import ElementTree as ET
@@ -12,7 +12,7 @@ from hscommon.util import FileOrPath

 class IgnoreList:
     """An ignore list implementation that is iterable, filterable and exportable to XML.

     Call Ignore to add an ignore list entry, and AreIgnored to check if 2 items are in the list.
     When iterated, 2-sized tuples will be returned, the tuples containing 2 items ignored together.
     """
@@ -20,43 +20,43 @@ class IgnoreList:
     def __init__(self):
         self._ignored = {}
         self._count = 0

     def __iter__(self):
-        for first,seconds in self._ignored.items():
+        for first, seconds in self._ignored.items():
             for second in seconds:
-                yield (first,second)
+                yield (first, second)

     def __len__(self):
         return self._count

     #---Public
-    def AreIgnored(self,first,second):
-        def do_check(first,second):
+    def AreIgnored(self, first, second):
+        def do_check(first, second):
             try:
                 matches = self._ignored[first]
                 return second in matches
             except KeyError:
                 return False
-        return do_check(first,second) or do_check(second,first)
+        return do_check(first, second) or do_check(second, first)

     def Clear(self):
         self._ignored = {}
         self._count = 0

-    def Filter(self,func):
+    def Filter(self, func):
         """Applies a filter on all ignored items, and remove all matches where func(first,second)
         doesn't return True.
         """
         filtered = IgnoreList()
-        for first,second in self:
-            if func(first,second):
-                filtered.Ignore(first,second)
+        for first, second in self:
+            if func(first, second):
+                filtered.Ignore(first, second)
         self._ignored = filtered._ignored
         self._count = filtered._count

-    def Ignore(self,first,second):
-        if self.AreIgnored(first,second):
+    def Ignore(self, first, second):
+        if self.AreIgnored(first, second):
             return
         try:
             matches = self._ignored[first]
@@ -70,7 +70,7 @@ class IgnoreList:
             matches.add(second)
             self._ignored[first] = matches
         self._count += 1

     def remove(self, first, second):
         def inner(first, second):
             try:
@@ -85,14 +85,14 @@ class IgnoreList:
                     return False
             except KeyError:
                 return False

         if not inner(first, second):
             if not inner(second, first):
                 raise ValueError()

     def load_from_xml(self, infile):
         """Loads the ignore list from a XML created with save_to_xml.

         infile can be a file object or a filename.
         """
         try:
@@ -109,10 +109,10 @@ class IgnoreList:
             subfile_path = sfn.get('path')
             if subfile_path:
                 self.Ignore(file_path, subfile_path)

     def save_to_xml(self, outfile):
         """Create a XML file that can be used by load_from_xml.

         outfile can be a file object or a filename.
         """
         root = ET.Element('ignore_list')
@@ -125,5 +125,5 @@ class IgnoreList:
         tree = ET.ElementTree(root)
         with FileOrPath(outfile, 'wb') as fp:
             tree.write(fp, encoding='utf-8')
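The `IgnoreList` hunks above are pure PEP 8 spacing fixes; the class's contract is that ignoring is symmetric: `Ignore(a, b)` makes both `AreIgnored(a, b)` and `AreIgnored(b, a)` true, and the symmetric duplicate is not counted twice. A minimal sketch of that behaviour (`PairIgnoreList` is a hypothetical reimplementation for illustration, not the `core.ignore` API):

```python
class PairIgnoreList:
    """Unordered-pair ignore list: ignoring (a, b) also ignores (b, a)."""
    def __init__(self):
        self._ignored = {}  # first -> set of seconds
        self._count = 0

    def are_ignored(self, first, second):
        # Check both orientations, since pairs are stored one-way.
        def check(a, b):
            return b in self._ignored.get(a, set())
        return check(first, second) or check(second, first)

    def ignore(self, first, second):
        if self.are_ignored(first, second):
            return  # already ignored in either direction; don't double-count
        self._ignored.setdefault(first, set()).add(second)
        self._count += 1

    def __len__(self):
        return self._count

il = PairIgnoreList()
il.ignore('a', 'b')
```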

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2006/02/23
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,6 +1,6 @@
 # Created By: Virgil Dupras
 # Created On: 2011/09/07
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at

View File

@@ -1,9 +1,9 @@
 # Created By: Virgil Dupras
 # Created On: 2006/02/23
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 import logging
@@ -12,7 +12,7 @@ import os
 import os.path as op
 from xml.etree import ElementTree as ET

-from jobprogress.job import nulljob
+from hscommon.jobprogress.job import nulljob
 from hscommon.conflict import get_conflicted_name
 from hscommon.util import flatten, nonone, FileOrPath, format_size
 from hscommon.trans import tr
@@ -22,15 +22,15 @@ from .markable import Markable
 class Results(Markable):
     """Manages a collection of duplicate :class:`~core.engine.Group`.

     This class takes care of marking, sorting and filtering duplicate groups.

     .. attribute:: groups

         The list of :class:`~core.engine.Group` managed by this instance.

     .. attribute:: dupes

         A list of all duplicates (:class:`~core.fs.File` instances), without ref, contained in the
         currently managed :attr:`groups`.
     """
@@ -50,16 +50,16 @@ class Results(Markable):
         self.app = app
         self.problems = [] # (dupe, error_msg)
         self.is_modified = False

     def _did_mark(self, dupe):
         self.__marked_size += dupe.size

     def _did_unmark(self, dupe):
         self.__marked_size -= dupe.size

     def _get_markable_count(self):
         return self.__total_count

     def _is_markable(self, dupe):
         if dupe.is_ref:
             return False
@@ -71,45 +71,48 @@ class Results(Markable):
         if self.__filtered_dupes and dupe not in self.__filtered_dupes:
             return False
         return True

     def mark_all(self):
         if self.__filters:
             self.mark_multiple(self.__filtered_dupes)
         else:
             Markable.mark_all(self)

     def mark_invert(self):
         if self.__filters:
             self.mark_toggle_multiple(self.__filtered_dupes)
         else:
             Markable.mark_invert(self)

     def mark_none(self):
         if self.__filters:
             self.unmark_multiple(self.__filtered_dupes)
         else:
             Markable.mark_none(self)

     #---Private
     def __get_dupe_list(self):
         if self.__dupes is None:
             self.__dupes = flatten(group.dupes for group in self.groups)
             if None in self.__dupes:
                 # This is debug logging to try to figure out #44
-                logging.warning("There is a None value in the Results' dupe list. dupes: %r groups: %r", self.__dupes, self.groups)
+                logging.warning(
+                    "There is a None value in the Results' dupe list. dupes: %r groups: %r",
+                    self.__dupes, self.groups
+                )
             if self.__filtered_dupes:
                 self.__dupes = [dupe for dupe in self.__dupes if dupe in self.__filtered_dupes]
             sd = self.__dupes_sort_descriptor
             if sd:
                 self.sort_dupes(sd[0], sd[1], sd[2])
         return self.__dupes

     def __get_groups(self):
         if self.__filtered_groups is None:
             return self.__groups
         else:
             return self.__filtered_groups

     def __get_stat_line(self):
         if self.__filtered_dupes is None:
             mark_count = self.mark_count
@@ -132,7 +135,7 @@ class Results(Markable):
         if self.__filters:
             result += tr(" filter: %s") % ' --> '.join(self.__filters)
         return result

     def __recalculate_stats(self):
         self.__total_size = 0
         self.__total_count = 0
@@ -140,7 +143,7 @@ class Results(Markable):
             markable = [dupe for dupe in group.dupes if self._is_markable(dupe)]
             self.__total_count += len(markable)
             self.__total_size += sum(dupe.size for dupe in markable)

     def __set_groups(self, new_groups):
         self.mark_none()
         self.__groups = new_groups
@@ -155,18 +158,18 @@ class Results(Markable):
         self.apply_filter(None)
         for filter_str in old_filters:
             self.apply_filter(filter_str)

     #---Public
     def apply_filter(self, filter_str):
         """Applies a filter ``filter_str`` to :attr:`groups`

         When you apply the filter, only dupes with a filename matching ``filter_str`` will be in
         the results. To cancel the filter, just call apply_filter with ``filter_str`` set to None,
         and the results will go back to normal.

         If you call apply_filter on already filtered results, the filter will be applied
         *on the filtered results*.

         :param str filter_str: a string containing a regexp to filter dupes with.
         """
         if not filter_str:
@@ -193,7 +196,7 @@ class Results(Markable):
         if sd:
             self.sort_groups(sd[0], sd[1])
         self.__dupes = None

     def get_group_of_duplicate(self, dupe):
         """Returns :class:`~core.engine.Group` in which ``dupe`` belongs.
         """
@@ -201,12 +204,12 @@ class Results(Markable):
             return self.__group_of_duplicate[dupe]
         except (TypeError, KeyError):
             return None

     is_markable = _is_markable

     def load_from_xml(self, infile, get_file, j=nulljob):
         """Load results from ``infile``.

         :param infile: a file or path pointing to an XML file created with :meth:`save_to_xml`.
         :param get_file: a function f(path) returning a :class:`~core.fs.File` wrapping the path.
         :param j: A :ref:`job progress instance <jobs>`.
@@ -217,7 +220,7 @@ class Results(Markable):
             for other_file in other_files:
                 group.add_match(engine.get_match(ref_file, other_file))
             do_match(other_files[0], other_files[1:], group)

         self.apply_filter(None)
         try:
             root = ET.parse(infile).getroot()
@@ -249,19 +252,20 @@ class Results(Markable):
                     second_file = dupes[int(attrs['second'])]
                     percentage = int(attrs['percentage'])
                     group.add_match(engine.Match(first_file, second_file, percentage))
-                except (IndexError, KeyError, ValueError): # Covers missing attr, non-int values and indexes out of bounds
+                except (IndexError, KeyError, ValueError):
+                    # Covers missing attr, non-int values and indexes out of bounds
                     pass
             if (not group.matches) and (len(dupes) >= 2):
                 do_match(dupes[0], dupes[1:], group)
             group.prioritize(lambda x: dupes.index(x))
             if len(group):
                 groups.append(group)
             j.add_progress()
         self.groups = groups
         for dupe_file in marked:
             self.mark(dupe_file)
         self.is_modified = False

     def make_ref(self, dupe):
         """Make ``dupe`` take the :attr:`~core.engine.Group.ref` position of its group.
         """
@@ -279,13 +283,13 @@ class Results(Markable):
         self.__dupes = None
         self.is_modified = True
         return True

     def perform_on_marked(self, func, remove_from_results):
         """Performs ``func`` on all marked dupes.

         If an ``EnvironmentError`` is raised during the call, the problematic dupe is added to
         self.problems.

         :param bool remove_from_results: If true, dupes which had ``func`` applied and didn't cause
             any problem are removed from the results.
         """
@@ -303,10 +307,10 @@ class Results(Markable):
         self.mark_none()
         for dupe, _ in self.problems:
             self.mark(dupe)

     def remove_duplicates(self, dupes):
         """Remove ``dupes`` from their respective :class:`~core.engine.Group`.

         Also, remove the group from :attr:`groups` if it ends up empty.
         """
         affected_groups = set()
@@ -331,10 +335,10 @@ class Results(Markable):
             group.discard_matches()
         self.__dupes = None
         self.is_modified = bool(self.__groups)

     def save_to_xml(self, outfile):
         """Save results to ``outfile`` in XML.

         :param outfile: file object or path.
         """
         self.apply_filter(None)
@@ -362,11 +366,11 @@ class Results(Markable):
                 match_elem.set('second', str(dupe2index[match.second]))
                 match_elem.set('percentage', str(int(match.percentage)))
         tree = ET.ElementTree(root)

         def do_write(outfile):
             with FileOrPath(outfile, 'wb') as fp:
                 tree.write(fp, encoding='utf-8')

         try:
             do_write(outfile)
         except IOError as e:
@@ -381,10 +385,10 @@ class Results(Markable):
             else:
                 raise
         self.is_modified = False

     def sort_dupes(self, key, asc=True, delta=False):
         """Sort :attr:`dupes` according to ``key``.

         :param str key: key attribute name to sort with.
         :param bool asc: If false, sorting is reversed.
         :param bool delta: If true, sorting occurs using :ref:`delta values <deltavalues>`.
@@ -393,21 +397,22 @@ class Results(Markable):
             self.__get_dupe_list()
         keyfunc = lambda d: self.app._get_dupe_sort_key(d, lambda: self.get_group_of_duplicate(d), key, delta)
         self.__dupes.sort(key=keyfunc, reverse=not asc)
-        self.__dupes_sort_descriptor = (key,asc,delta)
+        self.__dupes_sort_descriptor = (key, asc, delta)

     def sort_groups(self, key, asc=True):
         """Sort :attr:`groups` according to ``key``.

         The :attr:`~core.engine.Group.ref` of each group is used to extract values for sorting.

         :param str key: key attribute name to sort with.
         :param bool asc: If false, sorting is reversed.
         """
         keyfunc = lambda g: self.app._get_group_sort_key(g, key)
         self.groups.sort(key=keyfunc, reverse=not asc)
-        self.__groups_sort_descriptor = (key,asc)
+        self.__groups_sort_descriptor = (key, asc)

     #---Properties
     dupes = property(__get_dupe_list)
     groups = property(__get_groups, __set_groups)
     stat_line = property(__get_stat_line)


@@ -1,16 +1,16 @@
 # Created By: Virgil Dupras
 # Created On: 2006/03/03
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 import logging
 import re
 import os.path as op

-from jobprogress import job
+from hscommon.jobprogress import job
 from hscommon.util import dedupe, rem_file_ext, get_file_ext
 from hscommon.trans import tr
@@ -29,11 +29,10 @@ class ScanType:
     Folders = 4
     Contents = 5
     ContentsAudio = 6

     #PE
     FuzzyBlock = 10
     ExifTimestamp = 11
-    TriggerHappyMode = 12

 SCANNABLE_TAGS = ['track', 'artist', 'album', 'title', 'genre', 'year']
@@ -73,7 +72,7 @@ class Scanner:
     def __init__(self):
         self.ignore_list = IgnoreList()
         self.discarded_file_count = 0

     def _getmatches(self, files, j):
         if self.size_threshold:
             j = j.start_subjob([2, 8])
@@ -82,7 +81,9 @@ class Scanner:
             files = [f for f in files if f.size >= self.size_threshold]
         if self.scan_type in {ScanType.Contents, ScanType.ContentsAudio, ScanType.Folders}:
             sizeattr = 'audiosize' if self.scan_type == ScanType.ContentsAudio else 'size'
-            return engine.getmatches_by_contents(files, sizeattr, partial=self.scan_type==ScanType.ContentsAudio, j=j)
+            return engine.getmatches_by_contents(
+                files, sizeattr, partial=self.scan_type == ScanType.ContentsAudio, j=j
+            )
         else:
             j = j.start_subjob([2, 8])
             kw = {}
@@ -95,17 +96,21 @@ class Scanner:
             func = {
                 ScanType.Filename: lambda f: engine.getwords(rem_file_ext(f.name)),
                 ScanType.Fields: lambda f: engine.getfields(rem_file_ext(f.name)),
-                ScanType.Tag: lambda f: [engine.getwords(str(getattr(f, attrname))) for attrname in SCANNABLE_TAGS if attrname in self.scanned_tags],
+                ScanType.Tag: lambda f: [
+                    engine.getwords(str(getattr(f, attrname)))
+                    for attrname in SCANNABLE_TAGS
+                    if attrname in self.scanned_tags
+                ],
             }[self.scan_type]
             for f in j.iter_with_progress(files, tr("Read metadata of %d/%d files")):
                 logging.debug("Reading metadata of {}".format(str(f.path)))
                 f.words = func(f)
             return engine.getmatches(files, j=j, **kw)

     @staticmethod
     def _key_func(dupe):
         return -dupe.size

     @staticmethod
     def _tie_breaker(ref, dupe):
         refname = rem_file_ext(ref.name).lower()
@@ -119,7 +124,7 @@ class Scanner:
         if is_same_with_digit(refname, dupename):
             return True
         return len(dupe.path) > len(ref.path)

     def get_dupe_groups(self, files, j=job.nulljob):
         j = j.start_subjob([8, 2])
         for f in (f for f in files if not hasattr(f, 'is_ref')):
@@ -153,8 +158,10 @@ class Scanner:
         if self.ignore_list:
             j = j.start_subjob(2)
             iter_matches = j.iter_with_progress(matches, tr("Processed %d/%d matches against the ignore list"))
-            matches = [m for m in iter_matches
-                if not self.ignore_list.AreIgnored(str(m.first.path), str(m.second.path))]
+            matches = [
+                m for m in iter_matches
+                if not self.ignore_list.AreIgnored(str(m.first.path), str(m.second.path))
+            ]
         logging.info('Grouping matches')
         groups = engine.get_groups(matches, j)
         matched_files = dedupe([m.first for m in matches] + [m.second for m in matches])
@@ -178,11 +185,12 @@ class Scanner:
         for g in groups:
             g.prioritize(self._key_func, self._tie_breaker)
         return groups

     match_similar_words = False
     min_match_percentage = 80
     mix_file_kind = True
     scan_type = ScanType.Filename
     scanned_tags = {'artist', 'title'}
     size_threshold = 0
     word_weighting = False


@@ -1,9 +1,9 @@
 # Created By: Virgil Dupras
 # Created On: 2007-06-23
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
 #
 # This software is licensed under the "BSD" License as described in the "LICENSE" file,
 # which should be included with this package. The terms are also available at
 # http://www.hardcoded.net/licenses/bsd_license

 import os
@@ -15,7 +15,7 @@ from hscommon.path import Path
 import hscommon.conflict
 import hscommon.util
 from hscommon.testutil import CallLogger, eq_, log_calls
-from jobprogress.job import Job
+from hscommon.jobprogress.job import Job

 from .base import DupeGuru, TestApp
 from .results_test import GetTestGroups
@@ -36,7 +36,7 @@ class TestCaseDupeGuru:
         assert call['filter_str'] is None
         call = dgapp.results.apply_filter.calls[1]
         eq_('foo', call['filter_str'])

     def test_apply_filter_escapes_regexp(self, monkeypatch):
         dgapp = TestApp().app
         monkeypatch.setattr(dgapp.results, 'apply_filter', log_calls(dgapp.results.apply_filter))
@@ -50,7 +50,7 @@ class TestCaseDupeGuru:
         dgapp.apply_filter('(abc)')
         call = dgapp.results.apply_filter.calls[5]
         eq_('(abc)', call['filter_str'])

     def test_copy_or_move(self, tmpdir, monkeypatch):
         # The goal here is just to have a test for a previous blowup I had. I know my test coverage
         # for this unit is pathetic. What's done is done. My approach now is to add tests for
@@ -69,7 +69,7 @@ class TestCaseDupeGuru:
         call = hscommon.conflict.smart_copy.calls[0]
         eq_(call['dest_path'], op.join('some_destination', 'foo'))
         eq_(call['source_path'], f.path)

     def test_copy_or_move_clean_empty_dirs(self, tmpdir, monkeypatch):
         tmppath = Path(str(tmpdir))
         sourcepath = tmppath['source']
@@ -83,13 +83,13 @@ class TestCaseDupeGuru:
         calls = app.clean_empty_dirs.calls
         eq_(1, len(calls))
         eq_(sourcepath, calls[0]['path'])

     def test_Scan_with_objects_evaluating_to_false(self):
         class FakeFile(fs.File):
             def __bool__(self):
                 return False

         # At some point, any() was used in a wrong way that made Scan() wrongly return 1
         app = TestApp().app
         f1, f2 = [FakeFile('foo') for i in range(2)]
@@ -97,7 +97,7 @@ class TestCaseDupeGuru:
         assert not (bool(f1) and bool(f2))
         add_fake_files_to_directories(app.directories, [f1, f2])
         app.start_scanning() # no exception

     @mark.skipif("not hasattr(os, 'link')")
     def test_ignore_hardlink_matches(self, tmpdir):
         # If the ignore_hardlink_matches option is set, don't match files hardlinking to the same
@@ -111,7 +111,7 @@ class TestCaseDupeGuru:
         app.options['ignore_hardlink_matches'] = True
         app.start_scanning()
         eq_(len(app.results.groups), 0)

     def test_rename_when_nothing_is_selected(self):
         # Issue #140
         # It's possible that rename operation has its selected row swept off from under it, thus
@@ -127,11 +127,11 @@ class TestCaseDupeGuru_clean_empty_dirs:
         # XXX This monkeypatch is temporary. will be fixed in a better monkeypatcher.
         monkeypatch.setattr(app, 'delete_if_empty', hscommon.util.delete_if_empty)
         self.app = TestApp().app

     def test_option_off(self, do_setup):
         self.app.clean_empty_dirs(Path('/foo/bar'))
         eq_(0, len(hscommon.util.delete_if_empty.calls))

     def test_option_on(self, do_setup):
         self.app.options['clean_empty_dirs'] = True
         self.app.clean_empty_dirs(Path('/foo/bar'))
@@ -139,13 +139,13 @@ class TestCaseDupeGuru_clean_empty_dirs:
         eq_(1, len(calls))
         eq_(Path('/foo/bar'), calls[0]['path'])
         eq_(['.DS_Store'], calls[0]['files_to_delete'])

     def test_recurse_up(self, do_setup, monkeypatch):
         # delete_if_empty must be recursively called up in the path until it returns False
         @log_calls
         def mock_delete_if_empty(path, files_to_delete=[]):
             return len(path) > 1

         monkeypatch.setattr(hscommon.util, 'delete_if_empty', mock_delete_if_empty)
         # XXX This monkeypatch is temporary. will be fixed in a better monkeypatcher.
         monkeypatch.setattr(app, 'delete_if_empty', mock_delete_if_empty)
@@ -156,7 +156,7 @@ class TestCaseDupeGuru_clean_empty_dirs:
         eq_(Path('not-empty/empty/empty'), calls[0]['path'])
         eq_(Path('not-empty/empty'), calls[1]['path'])
         eq_(Path('not-empty'), calls[2]['path'])

 class TestCaseDupeGuruWithResults:
     def pytest_funcarg__do_setup(self, request):
@@ -173,7 +173,7 @@ class TestCaseDupeGuruWithResults:
         tmppath['foo'].mkdir()
         tmppath['bar'].mkdir()
         self.app.directories.add_path(tmppath)

     def test_GetObjects(self, do_setup):
         objects = self.objects
         groups = self.groups
@@ -186,7 +186,7 @@ class TestCaseDupeGuruWithResults:
         r = self.rtable[4]
         assert r._group is groups[1]
         assert r._dupe is objects[4]

     def test_GetObjects_after_sort(self, do_setup):
         objects = self.objects
         groups = self.groups[:] # we need an un-sorted reference
@@ -194,14 +194,14 @@ class TestCaseDupeGuruWithResults:
         r = self.rtable[1]
         assert r._group is groups[1]
         assert r._dupe is objects[4]

     def test_selected_result_node_paths_after_deletion(self, do_setup):
         # cases where the selected dupes aren't there are correctly handled
         self.rtable.select([1, 2, 3])
         self.app.remove_selected()
         # The first 2 dupes have been removed. The 3rd one is a ref. it stays there, in first pos.
         eq_(self.rtable.selected_indexes, [1]) # no exception

     def test_selectResultNodePaths(self, do_setup):
         app = self.app
         objects = self.objects
@@ -209,7 +209,7 @@ class TestCaseDupeGuruWithResults:
         eq_(len(app.selected_dupes), 2)
         assert app.selected_dupes[0] is objects[1]
         assert app.selected_dupes[1] is objects[2]

     def test_selectResultNodePaths_with_ref(self, do_setup):
         app = self.app
         objects = self.objects
@@ -218,26 +218,26 @@ class TestCaseDupeGuruWithResults:
         assert app.selected_dupes[0] is objects[1]
         assert app.selected_dupes[1] is objects[2]
         assert app.selected_dupes[2] is self.groups[1].ref

     def test_selectResultNodePaths_after_sort(self, do_setup):
         app = self.app
         objects = self.objects
         groups = self.groups[:] #To keep the old order in memory
         self.rtable.sort('name', False) #0
         #Now, the group order is supposed to be reversed
         self.rtable.select([1, 2, 3])
         eq_(len(app.selected_dupes), 3)
         assert app.selected_dupes[0] is objects[4]
         assert app.selected_dupes[1] is groups[0].ref
         assert app.selected_dupes[2] is objects[1]

     def test_selected_powermarker_node_paths(self, do_setup):
         # app.selected_dupes is correctly converted into paths
         self.rtable.power_marker = True
         self.rtable.select([0, 1, 2])
         self.rtable.power_marker = False
         eq_(self.rtable.selected_indexes, [1, 2, 4])

     def test_selected_powermarker_node_paths_after_deletion(self, do_setup):
         # cases where the selected dupes aren't there are correctly handled
         app = self.app
@@ -245,7 +245,7 @@ class TestCaseDupeGuruWithResults:
         self.rtable.select([0, 1, 2])
         app.remove_selected()
         eq_(self.rtable.selected_indexes, []) # no exception

     def test_selectPowerMarkerRows_after_sort(self, do_setup):
         app = self.app
         objects = self.objects
@@ -256,7 +256,7 @@ class TestCaseDupeGuruWithResults:
         assert app.selected_dupes[0] is objects[4]
         assert app.selected_dupes[1] is objects[2]
         assert app.selected_dupes[2] is objects[1]

     def test_toggle_selected_mark_state(self, do_setup):
         app = self.app
         objects = self.objects
@@ -270,7 +270,7 @@ class TestCaseDupeGuruWithResults:
         assert not app.results.is_marked(objects[2])
         assert not app.results.is_marked(objects[3])
         assert app.results.is_marked(objects[4])

     def test_toggle_selected_mark_state_with_different_selected_state(self, do_setup):
         # When marking selected dupes with a heterogenous selection, mark all selected dupes. When
         # it's homogenous, simply toggle.
@@ -285,7 +285,7 @@ class TestCaseDupeGuruWithResults:
         eq_(app.results.mark_count, 2)
         app.toggle_selected_mark_state()
         eq_(app.results.mark_count, 0)

     def test_refreshDetailsWithSelected(self, do_setup):
         self.rtable.select([1, 4])
         eq_(self.dpanel.row(0), ('Filename', 'bar bleh', 'foo bar'))
@@ -293,7 +293,7 @@ class TestCaseDupeGuruWithResults:
         self.rtable.select([])
         eq_(self.dpanel.row(0), ('Filename', '---', '---'))
         self.dpanel.view.check_gui_calls(['refresh'])

     def test_makeSelectedReference(self, do_setup):
         app = self.app
         objects = self.objects
@@ -302,7 +302,7 @@ class TestCaseDupeGuruWithResults:
         app.make_selected_reference()
         assert groups[0].ref is objects[1]
         assert groups[1].ref is objects[4]

     def test_makeSelectedReference_by_selecting_two_dupes_in_the_same_group(self, do_setup):
         app = self.app
         objects = self.objects
@@ -312,7 +312,7 @@ class TestCaseDupeGuruWithResults:
         app.make_selected_reference()
         assert groups[0].ref is objects[1]
         assert groups[1].ref is objects[4]

     def test_removeSelected(self, do_setup):
         app = self.app
         self.rtable.select([1, 4])
@@ -320,7 +320,7 @@ class TestCaseDupeGuruWithResults:
         eq_(len(app.results.dupes), 1) # the first path is now selected
         app.remove_selected()
         eq_(len(app.results.dupes), 0)

     def test_addDirectory_simple(self, do_setup):
         # There's already a directory in self.app, so adding another once makes 2 of em
         app = self.app
@@ -328,7 +328,7 @@ class TestCaseDupeGuruWithResults:
         otherpath = Path(op.dirname(__file__))
         app.add_directory(otherpath)
         eq_(len(app.directories), 2)

     def test_addDirectory_already_there(self, do_setup):
         app = self.app
         otherpath = Path(op.dirname(__file__))
@@ -336,13 +336,13 @@ class TestCaseDupeGuruWithResults:
         app.add_directory(otherpath)
         eq_(len(app.view.messages), 1)
         assert "already" in app.view.messages[0]

     def test_addDirectory_does_not_exist(self, do_setup):
         app = self.app
         app.add_directory('/does_not_exist')
         eq_(len(app.view.messages), 1)
         assert "exist" in app.view.messages[0]

     def test_ignore(self, do_setup):
         app = self.app
         self.rtable.select([4]) #The dupe of the second, 2 sized group
@@ -352,7 +352,7 @@ class TestCaseDupeGuruWithResults:
         app.add_selected_to_ignore_list()
         #BOTH the ref and the other dupe should have been added
         eq_(len(app.scanner.ignore_list), 3)

     def test_purgeIgnoreList(self, do_setup, tmpdir):
         app = self.app
         p1 = str(tmpdir.join('file1'))
@@ -367,19 +367,19 @@ class TestCaseDupeGuruWithResults:
         eq_(1,len(app.scanner.ignore_list))
         assert app.scanner.ignore_list.AreIgnored(p1,p2)
         assert not app.scanner.ignore_list.AreIgnored(dne,p1)

     def test_only_unicode_is_added_to_ignore_list(self, do_setup):
         def FakeIgnore(first,second):
             if not isinstance(first,str):
                 self.fail()
             if not isinstance(second,str):
                 self.fail()

         app = self.app
         app.scanner.ignore_list.Ignore = FakeIgnore
         self.rtable.select([4])
         app.add_selected_to_ignore_list()

     def test_cancel_scan_with_previous_results(self, do_setup):
         # When doing a scan with results being present prior to the scan, correctly invalidate the
         # results table.
@@ -388,7 +388,7 @@ class TestCaseDupeGuruWithResults:
         add_fake_files_to_directories(app.directories, self.objects) # We want the scan to at least start
         app.start_scanning() # will be cancelled immediately
         eq_(len(self.rtable), 0)

     def test_selected_dupes_after_removal(self, do_setup):
         # Purge the app's `selected_dupes` attribute when removing dupes, or else it might cause a
         # crash later with None refs.
@@ -398,7 +398,7 @@ class TestCaseDupeGuruWithResults:
         app.remove_marked()
         eq_(len(self.rtable), 0)
         eq_(app.selected_dupes, [])

     def test_dont_crash_on_delta_powermarker_dupecount_sort(self, do_setup):
         # Don't crash when sorting by dupe count or percentage while delta+powermarker are enabled.
         # Ref #238
@@ -410,7 +410,7 @@ class TestCaseDupeGuruWithResults:
         # don't crash
         self.rtable.sort('percentage', False)
         # don't crash

 class TestCaseDupeGuru_renameSelected:
     def pytest_funcarg__do_setup(self, request):
@@ -437,7 +437,7 @@ class TestCaseDupeGuru_renameSelected:
         self.groups = groups
         self.p = p
         self.files = files

     def test_simple(self, do_setup):
         app = self.app
         g = self.groups[0]
@@ -447,7 +447,7 @@ class TestCaseDupeGuru_renameSelected:
         assert 'renamed' in names
         assert 'foo bar 2' not in names
         eq_(g.dupes[0].name, 'renamed')

     def test_none_selected(self, do_setup, monkeypatch):
         app = self.app
         g = self.groups[0]
@@ -460,7 +460,7 @@ class TestCaseDupeGuru_renameSelected:
         assert 'renamed' not in names
         assert 'foo bar 2' in names
         eq_(g.dupes[0].name, 'foo bar 2')

     def test_name_already_exists(self, do_setup, monkeypatch):
         app = self.app
         g = self.groups[0]
@@ -473,7 +473,7 @@ class TestCaseDupeGuru_renameSelected:
         assert 'foo bar 1' in names
         assert 'foo bar 2' in names
         eq_(g.dupes[0].name, 'foo bar 2')

 class TestAppWithDirectoriesInTree:
     def pytest_funcarg__do_setup(self, request):
@@ -487,7 +487,7 @@ class TestAppWithDirectoriesInTree:
         self.dtree = app.dtree
         self.dtree.add_directory(p)
         self.dtree.view.clear_calls()

     def test_set_root_as_ref_makes_subfolders_ref_as_well(self, do_setup):
         # Setting a node state to something also affect subnodes. These subnodes must be correctly
         # refreshed.
@@ -500,4 +500,4 @@ class TestAppWithDirectoriesInTree:
         subnode = node[0]
         eq_(subnode.state, 1)
         self.dtree.view.check_gui_calls(['refresh_states'])


@@ -1,16 +1,16 @@
# Created By: Virgil Dupras
# Created On: 2011/09/07
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at
# http://www.hardcoded.net/licenses/bsd_license

from hscommon.testutil import TestApp as TestAppBase, eq_, with_app
from hscommon.path import Path
from hscommon.util import get_file_ext, format_size
from hscommon.gui.column import Column
-from jobprogress.job import nulljob, JobCancelled
+from hscommon.jobprogress.job import nulljob, JobCancelled

from .. import engine
from .. import prioritize
@@ -23,28 +23,28 @@ from ..gui.prioritize_dialog import PrioritizeDialog

class DupeGuruView:
    JOB = nulljob

    def __init__(self):
        self.messages = []

    def start_job(self, jobid, func, args=()):
        try:
            func(self.JOB, *args)
        except JobCancelled:
            return

    def get_default(self, key_name):
        return None

    def set_default(self, key_name, value):
        pass

    def show_message(self, msg):
        self.messages.append(msg)

    def ask_yes_no(self, prompt):
        return True # always answer yes

class ResultTable(ResultTableBase):
    COLUMNS = [
@@ -55,21 +55,21 @@ class ResultTable(ResultTableBase):
        Column('extension', 'Kind'),
    ]
    DELTA_COLUMNS = {'size', }

class DupeGuru(DupeGuruBase):
    NAME = 'dupeGuru'
    METADATA_TO_READ = ['size']

    def __init__(self):
        DupeGuruBase.__init__(self, DupeGuruView())
        self.appdata = '/tmp'

    def _prioritization_categories(self):
        return prioritize.all_categories()

    def _create_result_table(self):
        return ResultTable(self)

class NamedObject:
    def __init__(self, name="foobar", with_words=False, size=1, folder=None):
@@ -83,10 +83,10 @@ class NamedObject:
        if with_words:
            self.words = getwords(name)
        self.is_ref = False

    def __bool__(self):
        return False # Make sure that operations are made correctly when the bool value of files is false.

    def get_display_info(self, group, delta):
        size = self.size
        m = group.get_match_of(self)
@@ -99,19 +99,19 @@ class NamedObject:
            'size': format_size(size, 0, 1, False),
            'extension': self.extension if hasattr(self, 'extension') else '---',
        }

    @property
    def path(self):
        return self._folder[self.name]

    @property
    def folder_path(self):
        return self.path.parent()

    @property
    def extension(self):
        return get_file_ext(self.name)

# Returns a group set that looks like that:
# "foo bar" (1)
# "bar bleh" (1024)
@@ -135,7 +135,7 @@ class TestApp(TestAppBase):
            if hasattr(gui, 'columns'): # tables
                gui.columns.view = self.make_logger()
            return gui

        TestAppBase.__init__(self)
        make_gui = self.make_gui
        self.app = DupeGuru()
@@ -153,14 +153,14 @@ class TestApp(TestAppBase):
        link_gui(self.app.progress_window)
        link_gui(self.app.progress_window.jobdesc_textfield)
        link_gui(self.app.progress_window.progressdesc_textfield)

    #--- Helpers
    def select_pri_criterion(self, name):
        # Select a main prioritize criterion by name instead of by index. Makes tests more
        # maintainable.
        index = self.pdialog.category_list.index(name)
        self.pdialog.category_list.select(index)

    def add_pri_criterion(self, name, index):
        self.select_pri_criterion(name)
        self.pdialog.criteria_list.select([index])


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2006/02/27
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,14 +1,14 @@
# Created By: Virgil Dupras
# Created On: 2006/01/29
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at
# http://www.hardcoded.net/licenses/bsd_license

import sys

-from jobprogress import job
+from hscommon.jobprogress import job
from hscommon.util import first
from hscommon.testutil import eq_, log_calls
@@ -48,119 +48,119 @@ class TestCasegetwords:
    def test_spaces(self):
        eq_(['a', 'b', 'c', 'd'], getwords("a b c d"))
        eq_(['a', 'b', 'c', 'd'], getwords(" a b c d "))

    def test_splitter_chars(self):
        eq_(
            [chr(i) for i in range(ord('a'), ord('z')+1)],
            getwords("a-b_c&d+e(f)g;h\\i[j]k{l}m:n.o,p<q>r/s?t~u!v@w#x$y*z")
        )

    def test_joiner_chars(self):
        eq_(["aec"], getwords("a'e\u0301c"))

    def test_empty(self):
        eq_([], getwords(''))

    def test_returns_lowercase(self):
        eq_(['foo', 'bar'], getwords('FOO BAR'))

    def test_decompose_unicode(self):
        eq_(getwords('foo\xe9bar'), ['fooebar'])

class TestCasegetfields:
    def test_simple(self):
        eq_([['a', 'b'], ['c', 'd', 'e']], getfields('a b - c d e'))

    def test_empty(self):
        eq_([], getfields(''))

    def test_cleans_empty_fields(self):
        expected = [['a', 'bc', 'def']]
        actual = getfields(' - a bc def')
        eq_(expected, actual)
        expected = [['bc', 'def']]

class TestCaseunpack_fields:
    def test_with_fields(self):
        expected = ['a', 'b', 'c', 'd', 'e', 'f']
        actual = unpack_fields([['a'], ['b', 'c'], ['d', 'e', 'f']])
        eq_(expected, actual)

    def test_without_fields(self):
        expected = ['a', 'b', 'c', 'd', 'e', 'f']
        actual = unpack_fields(['a', 'b', 'c', 'd', 'e', 'f'])
        eq_(expected, actual)

    def test_empty(self):
        eq_([], unpack_fields([]))

class TestCaseWordCompare:
    def test_list(self):
        eq_(100, compare(['a', 'b', 'c', 'd'], ['a', 'b', 'c', 'd']))
        eq_(86, compare(['a', 'b', 'c', 'd'], ['a', 'b', 'c']))

    def test_unordered(self):
        #Sometimes, users don't want fuzzy matching too much. When they set the slider
        #to 100, they don't expect a filename with the same words, but not the same order, to match.
        #Thus, we want to return 99 in that case.
        eq_(99, compare(['a', 'b', 'c', 'd'], ['d', 'b', 'c', 'a']))

    def test_word_occurs_twice(self):
        #if a word occurs twice in first, but once in second, we want the word to be only counted once
        eq_(89, compare(['a', 'b', 'c', 'd', 'a'], ['d', 'b', 'c', 'a']))

    def test_uses_copy_of_lists(self):
        first = ['foo', 'bar']
        second = ['bar', 'bleh']
        compare(first, second)
        eq_(['foo', 'bar'], first)
        eq_(['bar', 'bleh'], second)

    def test_word_weight(self):
        eq_(int((6.0 / 13.0) * 100), compare(['foo', 'bar'], ['bar', 'bleh'], (WEIGHT_WORDS, )))

    def test_similar_words(self):
        eq_(100, compare(['the', 'white', 'stripes'], ['the', 'whites', 'stripe'], (MATCH_SIMILAR_WORDS, )))

    def test_empty(self):
        eq_(0, compare([], []))

    def test_with_fields(self):
        eq_(67, compare([['a', 'b'], ['c', 'd', 'e']], [['a', 'b'], ['c', 'd', 'f']]))

    def test_propagate_flags_with_fields(self, monkeypatch):
        def mock_compare(first, second, flags):
            eq_((0, 1, 2, 3, 5), flags)

        monkeypatch.setattr(engine, 'compare_fields', mock_compare)
        compare([['a']], [['a']], (0, 1, 2, 3, 5))

class TestCaseWordCompareWithFields:
    def test_simple(self):
        eq_(67, compare_fields([['a', 'b'], ['c', 'd', 'e']], [['a', 'b'], ['c', 'd', 'f']]))

    def test_empty(self):
        eq_(0, compare_fields([], []))

    def test_different_length(self):
        eq_(0, compare_fields([['a'], ['b']], [['a'], ['b'], ['c']]))

    def test_propagates_flags(self, monkeypatch):
        def mock_compare(first, second, flags):
            eq_((0, 1, 2, 3, 5), flags)

        monkeypatch.setattr(engine, 'compare_fields', mock_compare)
        compare_fields([['a']], [['a']], (0, 1, 2, 3, 5))

    def test_order(self):
        first = [['a', 'b'], ['c', 'd', 'e']]
        second = [['c', 'd', 'f'], ['a', 'b']]
        eq_(0, compare_fields(first, second))

    def test_no_order(self):
        first = [['a','b'],['c','d','e']]
        second = [['c','d','f'],['a','b']]
@@ -168,10 +168,10 @@ class TestCaseWordCompareWithFields:
        first = [['a','b'],['a','b']] #a field can only be matched once.
        second = [['c','d','f'],['a','b']]
        eq_(0, compare_fields(first, second, (NO_FIELD_ORDER, )))
        first = [['a','b'],['a','b','c']]
        second = [['c','d','f'],['a','b']]
        eq_(33, compare_fields(first, second, (NO_FIELD_ORDER, )))

    def test_compare_fields_without_order_doesnt_alter_fields(self):
        #The NO_ORDER comp type altered the fields!
        first = [['a','b'],['c','d','e']]
@@ -179,7 +179,7 @@ class TestCaseWordCompareWithFields:
        eq_(67, compare_fields(first, second, (NO_FIELD_ORDER, )))
        eq_([['a','b'],['c','d','e']], first)
        eq_([['c','d','f'],['a','b']], second)
class TestCasebuild_word_dict:
    def test_with_standard_words(self):
@@ -199,30 +199,30 @@ class TestCasebuild_word_dict:
        assert l[2] in d['baz']
        eq_(1, len(d['bleh']))
        assert l[2] in d['bleh']

    def test_unpack_fields(self):
        o = NamedObject('')
        o.words = [['foo','bar'],['baz']]
        d = build_word_dict([o])
        eq_(3, len(d))
        eq_(1, len(d['foo']))

    def test_words_are_unaltered(self):
        o = NamedObject('')
        o.words = [['foo','bar'],['baz']]
        build_word_dict([o])
        eq_([['foo','bar'],['baz']], o.words)

    def test_object_instances_can_only_be_once_in_words_object_list(self):
        o = NamedObject('foo foo', True)
        d = build_word_dict([o])
        eq_(1, len(d['foo']))

    def test_job(self):
        def do_progress(p, d=''):
            self.log.append(p)
            return True

        j = job.Job(1, do_progress)
        self.log = []
        s = "foo bar"
@@ -230,7 +230,7 @@ class TestCasebuild_word_dict:
        # We don't have intermediate log because iter_with_progress is called with every > 1
        eq_(0, self.log[0])
        eq_(100, self.log[1])

class TestCasemerge_similar_words:
    def test_some_similar_words(self):
@@ -242,8 +242,8 @@ class TestCasemerge_similar_words:
        merge_similar_words(d)
        eq_(1, len(d))
        eq_(3, len(d['foobar']))

class TestCasereduce_common_words:
    def test_typical(self):
@@ -254,7 +254,7 @@ class TestCasereduce_common_words:
        reduce_common_words(d, 50)
        assert 'foo' not in d
        eq_(49, len(d['bar']))

    def test_dont_remove_objects_with_only_common_words(self):
        d = {
            'common': set([NamedObject("common uncommon", True) for i in range(50)] + [NamedObject("common", True)]),
@@ -263,7 +263,7 @@ class TestCasereduce_common_words:
        reduce_common_words(d, 50)
        eq_(1, len(d['common']))
        eq_(1, len(d['uncommon']))

    def test_values_still_are_set_instances(self):
        d = {
            'common': set([NamedObject("common uncommon", True) for i in range(50)] + [NamedObject("common", True)]),
@@ -272,7 +272,7 @@ class TestCasereduce_common_words:
        reduce_common_words(d, 50)
        assert isinstance(d['common'], set)
        assert isinstance(d['uncommon'], set)

    def test_dont_raise_KeyError_when_a_word_has_been_removed(self):
        #If a word has been removed by the reduce, an object in a subsequent common word that
        #contains the word that has been removed would cause a KeyError.
@@ -285,14 +285,14 @@ class TestCasereduce_common_words:
            reduce_common_words(d, 50)
        except KeyError:
            self.fail()

    def test_unpack_fields(self):
        #object.words may be fields.
        def create_it():
            o = NamedObject('')
            o.words = [['foo','bar'],['baz']]
            return o

        d = {
            'foo': set([create_it() for i in range(50)])
        }
@@ -300,7 +300,7 @@ class TestCasereduce_common_words:
            reduce_common_words(d, 50)
        except TypeError:
            self.fail("must support fields.")

    def test_consider_a_reduced_common_word_common_even_after_reduction(self):
        #There was a bug in the code that caused a word that has already been reduced not to
        #be counted as a common word for subsequent words. For example, if 'foo' is processed
@@ -316,7 +316,7 @@ class TestCasereduce_common_words:
        eq_(1, len(d['foo']))
        eq_(1, len(d['bar']))
        eq_(49, len(d['baz']))
class TestCaseget_match:
    def test_simple(self):
@@ -328,7 +328,7 @@ class TestCaseget_match:
        eq_(['bar','bleh'], m.second.words)
        assert m.first is o1
        assert m.second is o2

    def test_in(self):
        o1 = NamedObject("foo", True)
        o2 = NamedObject("bar", True)
@@ -336,15 +336,15 @@ class TestCaseget_match:
        assert o1 in m
        assert o2 in m
        assert object() not in m

    def test_word_weight(self):
        eq_(int((6.0 / 13.0) * 100), get_match(NamedObject("foo bar", True), NamedObject("bar bleh", True), (WEIGHT_WORDS,)).percentage)

class TestCaseGetMatches:
    def test_empty(self):
        eq_(getmatches([]), [])

    def test_simple(self):
        l = [NamedObject("foo bar"), NamedObject("bar bleh"), NamedObject("a b c foo")]
        r = getmatches(l)
@@ -353,7 +353,7 @@ class TestCaseGetMatches:
        assert_match(m, 'foo bar', 'bar bleh')
        m = first(m for m in r if m.percentage == 33) #"foo bar" and "a b c foo"
        assert_match(m, 'foo bar', 'a b c foo')

    def test_null_and_unrelated_objects(self):
        l = [NamedObject("foo bar"), NamedObject("bar bleh"), NamedObject(""), NamedObject("unrelated object")]
        r = getmatches(l)
@@ -361,22 +361,22 @@ class TestCaseGetMatches:
        m = r[0]
        eq_(m.percentage, 50)
        assert_match(m, 'foo bar', 'bar bleh')

    def test_twice_the_same_word(self):
        l = [NamedObject("foo foo bar"), NamedObject("bar bleh")]
        r = getmatches(l)
        eq_(1, len(r))

    def test_twice_the_same_word_when_preworded(self):
        l = [NamedObject("foo foo bar", True), NamedObject("bar bleh", True)]
        r = getmatches(l)
        eq_(1, len(r))

    def test_two_words_match(self):
        l = [NamedObject("foo bar"), NamedObject("foo bar bleh")]
        r = getmatches(l)
        eq_(1, len(r))

    def test_match_files_with_only_common_words(self):
        #If a word occurs more than 50 times, it is excluded from the matching process
        #The problem with the common_word_threshold is that the files containing only common
@@ -385,18 +385,18 @@ class TestCaseGetMatches:
        l = [NamedObject("foo") for i in range(50)]
        r = getmatches(l)
        eq_(1225, len(r))

    def test_use_words_already_there_if_there(self):
        o1 = NamedObject('foo')
        o2 = NamedObject('bar')
        o2.words = ['foo']
        eq_(1, len(getmatches([o1, o2])))

    def test_job(self):
        def do_progress(p, d=''):
            self.log.append(p)
            return True

        j = job.Job(1, do_progress)
        self.log = []
        s = "foo bar"
@@ -404,12 +404,12 @@ class TestCaseGetMatches:
        assert len(self.log) > 2
        eq_(0, self.log[0])
        eq_(100, self.log[-1])

    def test_weight_words(self):
        l = [NamedObject("foo bar"), NamedObject("bar bleh")]
        m = getmatches(l, weight_words=True)[0]
        eq_(int((6.0 / 13.0) * 100), m.percentage)

    def test_similar_word(self):
        l = [NamedObject("foobar"), NamedObject("foobars")]
        eq_(len(getmatches(l, match_similar_words=True)), 1)
@@ -420,16 +420,16 @@ class TestCaseGetMatches:
        eq_(len(getmatches(l, match_similar_words=True)), 1)
        l = [NamedObject("foobar"), NamedObject("foosbar")]
        eq_(len(getmatches(l, match_similar_words=True)), 1)

    def test_single_object_with_similar_words(self):
        l = [NamedObject("foo foos")]
        eq_(len(getmatches(l, match_similar_words=True)), 0)

    def test_double_words_get_counted_only_once(self):
        l = [NamedObject("foo bar foo bleh"), NamedObject("foo bar bleh bar")]
        m = getmatches(l)[0]
        eq_(75, m.percentage)

    def test_with_fields(self):
        o1 = NamedObject("foo bar - foo bleh")
        o2 = NamedObject("foo bar - bleh bar")
@@ -437,7 +437,7 @@ class TestCaseGetMatches:
        o2.words = getfields(o2.name)
        m = getmatches([o1, o2])[0]
        eq_(50, m.percentage)

    def test_with_fields_no_order(self):
        o1 = NamedObject("foo bar - foo bleh")
        o2 = NamedObject("bleh bang - foo bar")
@@ -445,11 +445,11 @@ class TestCaseGetMatches:
        o2.words = getfields(o2.name)
        m = getmatches([o1, o2], no_field_order=True)[0]
        eq_(m.percentage, 50)

    def test_only_match_similar_when_the_option_is_set(self):
        l = [NamedObject("foobar"), NamedObject("foobars")]
        eq_(len(getmatches(l, match_similar_words=False)), 0)

    def test_dont_recurse_do_match(self):
        # with nosetests, the stack is increased. The number has to be high enough not to be failing falsely
        sys.setrecursionlimit(100)
@@ -460,19 +460,19 @@ class TestCaseGetMatches:
            self.fail()
        finally:
            sys.setrecursionlimit(1000)

    def test_min_match_percentage(self):
        l = [NamedObject("foo bar"), NamedObject("bar bleh"), NamedObject("a b c foo")]
        r = getmatches(l, min_match_percentage=50)
        eq_(1, len(r)) #Only "foo bar" / "bar bleh" should match

    def test_MemoryError(self, monkeypatch):
        @log_calls
        def mocked_match(first, second, flags):
            if len(mocked_match.calls) > 42:
                raise MemoryError()
            return Match(first, second, 0)

        objects = [NamedObject() for i in range(10)] # results in 45 matches
        monkeypatch.setattr(engine, 'get_match', mocked_match)
        try:
@@ -480,13 +480,13 @@ class TestCaseGetMatches:
        except MemoryError:
            self.fail('MemoryError must be handled')
        eq_(42, len(r))
class TestCaseGetMatchesByContents:
    def test_dont_compare_empty_files(self):
        o1, o2 = no(size=0), no(size=0)
        assert not getmatches_by_contents([o1, o2])

class TestCaseGroup:
    def test_empy(self):
@@ -494,7 +494,7 @@ class TestCaseGroup:
        eq_(None, g.ref)
        eq_([], g.dupes)
        eq_(0, len(g.matches))

    def test_add_match(self):
        g = Group()
        m = get_match(NamedObject("foo", True), NamedObject("bar", True))
@@ -503,7 +503,7 @@ class TestCaseGroup:
        eq_([m.second], g.dupes)
        eq_(1, len(g.matches))
        assert m in g.matches

    def test_multiple_add_match(self):
        g = Group()
        o1 = NamedObject("a", True)
@@ -529,13 +529,13 @@ class TestCaseGroup:
        g.add_match(get_match(o3, o4))
        eq_([o2, o3, o4], g.dupes)
        eq_(6, len(g.matches))

    def test_len(self):
        g = Group()
        eq_(0, len(g))
        g.add_match(get_match(NamedObject("foo", True), NamedObject("bar", True)))
        eq_(2, len(g))

    def test_add_same_match_twice(self):
        g = Group()
        m = get_match(NamedObject("foo", True), NamedObject("foo", True))
@@ -545,7 +545,7 @@ class TestCaseGroup:
        g.add_match(m)
        eq_(2, len(g))
        eq_(1, len(g.matches))

    def test_in(self):
        g = Group()
        o1 = NamedObject("foo", True)
@@ -554,7 +554,7 @@ class TestCaseGroup:
        g.add_match(get_match(o1, o2))
        assert o1 in g
        assert o2 in g

    def test_remove(self):
        g = Group()
        o1 = NamedObject("foo", True)
@@ -571,7 +571,7 @@ class TestCaseGroup:
        g.remove_dupe(o1)
        eq_(0, len(g.matches))
        eq_(0, len(g))

    def test_remove_with_ref_dupes(self):
        g = Group()
        o1 = NamedObject("foo", True)
@@ -584,7 +584,7 @@ class TestCaseGroup:
        o2.is_ref = True
        g.remove_dupe(o3)
        eq_(0, len(g))

    def test_switch_ref(self):
        o1 = NamedObject(with_words=True)
        o2 = NamedObject(with_words=True)
@@ -598,7 +598,7 @@ class TestCaseGroup:
        assert o2 is g.ref
        g.switch_ref(NamedObject('', True))
        assert o2 is g.ref

    def test_switch_ref_from_ref_dir(self):
        # When the ref dupe is from a ref dir, switch_ref() does nothing
        o1 = no(with_words=True)
@@ -608,7 +608,7 @@ class TestCaseGroup:
        g.add_match(get_match(o1, o2))
        g.switch_ref(o2)
        assert o1 is g.ref

    def test_get_match_of(self):
        g = Group()
        for m in get_match_triangle():
@@ -619,7 +619,7 @@ class TestCaseGroup:
            assert o in m
        assert g.get_match_of(NamedObject('', True)) is None
        assert g.get_match_of(g.ref) is None

    def test_percentage(self):
        #percentage should return the avg percentage in relation to the ref
        m1, m2, m3 = get_match_triangle()
@@ -638,11 +638,11 @@ class TestCaseGroup:
        g.add_match(m1)
        g.add_match(m2)
        eq_(66, g.percentage)

    def test_percentage_on_empty_group(self):
        g = Group()
        eq_(0, g.percentage)

    def test_prioritize(self):
        m1, m2, m3 = get_match_triangle()
        o1 = m1.first
@@ -658,7 +658,7 @@ class TestCaseGroup:
        assert o1 is g.ref
        assert g.prioritize(lambda x: x.name)
        assert o3 is g.ref

    def test_prioritize_with_tie_breaker(self):
        # if the ref has the same key as one or more of the dupes, run the tie_breaker func among them
        g = get_test_group()
@@ -666,9 +666,9 @@ class TestCaseGroup:
        tie_breaker = lambda ref, dupe: dupe is o3
        g.prioritize(lambda x: 0, tie_breaker)
assert g.ref is o3 assert g.ref is o3
def test_prioritize_with_tie_breaker_runs_on_all_dupes(self): def test_prioritize_with_tie_breaker_runs_on_all_dupes(self):
# Even if a dupe is chosen to switch with ref with a tie breaker, we still run the tie breaker # Even if a dupe is chosen to switch with ref with a tie breaker, we still run the tie breaker
# with other dupes and the newly chosen ref # with other dupes and the newly chosen ref
g = get_test_group() g = get_test_group()
o1, o2, o3 = g.ordered o1, o2, o3 = g.ordered
@@ -678,7 +678,7 @@ class TestCaseGroup:
tie_breaker = lambda ref, dupe: dupe.foo > ref.foo tie_breaker = lambda ref, dupe: dupe.foo > ref.foo
g.prioritize(lambda x:0, tie_breaker) g.prioritize(lambda x:0, tie_breaker)
assert g.ref is o3 assert g.ref is o3
def test_prioritize_with_tie_breaker_runs_only_on_tie_dupes(self): def test_prioritize_with_tie_breaker_runs_only_on_tie_dupes(self):
# The tie breaker only runs on dupes that had the same value for the key_func # The tie breaker only runs on dupes that had the same value for the key_func
g = get_test_group() g = get_test_group()
@@ -693,7 +693,7 @@ class TestCaseGroup:
tie_breaker = lambda ref, dupe: dupe.bar > ref.bar tie_breaker = lambda ref, dupe: dupe.bar > ref.bar
g.prioritize(key_func, tie_breaker) g.prioritize(key_func, tie_breaker)
assert g.ref is o2 assert g.ref is o2
def test_prioritize_with_ref_dupe(self): def test_prioritize_with_ref_dupe(self):
# when the ref dupe of a group is from a ref dir, make it stay on top. # when the ref dupe of a group is from a ref dir, make it stay on top.
g = get_test_group() g = get_test_group()
@@ -702,7 +702,7 @@ class TestCaseGroup:
o2.size = 2 o2.size = 2
g.prioritize(lambda x: -x.size) g.prioritize(lambda x: -x.size)
assert g.ref is o1 assert g.ref is o1
def test_prioritize_nothing_changes(self): def test_prioritize_nothing_changes(self):
# prioritize() returns False when nothing changes in the group. # prioritize() returns False when nothing changes in the group.
g = get_test_group() g = get_test_group()
@@ -710,14 +710,14 @@ class TestCaseGroup:
g[1].name = 'b' g[1].name = 'b'
g[2].name = 'c' g[2].name = 'c'
assert not g.prioritize(lambda x:x.name) assert not g.prioritize(lambda x:x.name)
def test_list_like(self): def test_list_like(self):
g = Group() g = Group()
o1,o2 = (NamedObject("foo",True),NamedObject("bar",True)) o1,o2 = (NamedObject("foo",True),NamedObject("bar",True))
g.add_match(get_match(o1,o2)) g.add_match(get_match(o1,o2))
assert g[0] is o1 assert g[0] is o1
assert g[1] is o2 assert g[1] is o2
def test_discard_matches(self): def test_discard_matches(self):
g = Group() g = Group()
o1,o2,o3 = (NamedObject("foo",True),NamedObject("bar",True),NamedObject("baz",True)) o1,o2,o3 = (NamedObject("foo",True),NamedObject("bar",True),NamedObject("baz",True))
@@ -726,13 +726,13 @@ class TestCaseGroup:
g.discard_matches() g.discard_matches()
eq_(1,len(g.matches)) eq_(1,len(g.matches))
eq_(0,len(g.candidates)) eq_(0,len(g.candidates))
class TestCaseget_groups: class TestCaseget_groups:
def test_empty(self): def test_empty(self):
r = get_groups([]) r = get_groups([])
eq_([],r) eq_([],r)
def test_simple(self): def test_simple(self):
l = [NamedObject("foo bar"),NamedObject("bar bleh")] l = [NamedObject("foo bar"),NamedObject("bar bleh")]
matches = getmatches(l) matches = getmatches(l)
@@ -742,7 +742,7 @@ class TestCaseget_groups:
g = r[0] g = r[0]
assert g.ref is m.first assert g.ref is m.first
eq_([m.second],g.dupes) eq_([m.second],g.dupes)
def test_group_with_multiple_matches(self): def test_group_with_multiple_matches(self):
#This results in 3 matches #This results in 3 matches
l = [NamedObject("foo"),NamedObject("foo"),NamedObject("foo")] l = [NamedObject("foo"),NamedObject("foo"),NamedObject("foo")]
@@ -751,7 +751,7 @@ class TestCaseget_groups:
eq_(1,len(r)) eq_(1,len(r))
g = r[0] g = r[0]
eq_(3,len(g)) eq_(3,len(g))
def test_must_choose_a_group(self): def test_must_choose_a_group(self):
l = [NamedObject("a b"),NamedObject("a b"),NamedObject("b c"),NamedObject("c d"),NamedObject("c d")] l = [NamedObject("a b"),NamedObject("a b"),NamedObject("b c"),NamedObject("c d"),NamedObject("c d")]
#There will be 2 groups here: group "a b" and group "c d" #There will be 2 groups here: group "a b" and group "c d"
@@ -760,7 +760,7 @@ class TestCaseget_groups:
r = get_groups(matches) r = get_groups(matches)
eq_(2,len(r)) eq_(2,len(r))
eq_(5,len(r[0])+len(r[1])) eq_(5,len(r[0])+len(r[1]))
def test_should_all_go_in_the_same_group(self): def test_should_all_go_in_the_same_group(self):
l = [NamedObject("a b"),NamedObject("a b"),NamedObject("a b"),NamedObject("a b")] l = [NamedObject("a b"),NamedObject("a b"),NamedObject("a b"),NamedObject("a b")]
#There will be 2 groups here: group "a b" and group "c d" #There will be 2 groups here: group "a b" and group "c d"
@@ -768,7 +768,7 @@ class TestCaseget_groups:
matches = getmatches(l) matches = getmatches(l)
r = get_groups(matches) r = get_groups(matches)
eq_(1,len(r)) eq_(1,len(r))
def test_give_priority_to_matches_with_higher_percentage(self): def test_give_priority_to_matches_with_higher_percentage(self):
o1 = NamedObject(with_words=True) o1 = NamedObject(with_words=True)
o2 = NamedObject(with_words=True) o2 = NamedObject(with_words=True)
@@ -782,14 +782,14 @@ class TestCaseget_groups:
assert o1 not in g assert o1 not in g
assert o2 in g assert o2 in g
assert o3 in g assert o3 in g
def test_four_sized_group(self): def test_four_sized_group(self):
l = [NamedObject("foobar") for i in range(4)] l = [NamedObject("foobar") for i in range(4)]
m = getmatches(l) m = getmatches(l)
r = get_groups(m) r = get_groups(m)
eq_(1,len(r)) eq_(1,len(r))
eq_(4,len(r[0])) eq_(4,len(r[0]))
def test_referenced_by_ref2(self): def test_referenced_by_ref2(self):
o1 = NamedObject(with_words=True) o1 = NamedObject(with_words=True)
o2 = NamedObject(with_words=True) o2 = NamedObject(with_words=True)
@@ -799,12 +799,12 @@ class TestCaseget_groups:
m3 = get_match(o3,o2) m3 = get_match(o3,o2)
r = get_groups([m1,m2,m3]) r = get_groups([m1,m2,m3])
eq_(3,len(r[0])) eq_(3,len(r[0]))
def test_job(self): def test_job(self):
def do_progress(p,d=''): def do_progress(p,d=''):
self.log.append(p) self.log.append(p)
return True return True
self.log = [] self.log = []
j = job.Job(1,do_progress) j = job.Job(1,do_progress)
m1,m2,m3 = get_match_triangle() m1,m2,m3 = get_match_triangle()
@@ -813,7 +813,7 @@ class TestCaseget_groups:
get_groups([m1,m2,m3,m4],j) get_groups([m1,m2,m3,m4],j)
eq_(0,self.log[0]) eq_(0,self.log[0])
eq_(100,self.log[-1]) eq_(100,self.log[-1])
def test_group_admissible_discarded_dupes(self): def test_group_admissible_discarded_dupes(self):
# If, with a (A, B, C, D) set, all match with A, but C and D don't match with B and that the # If, with a (A, B, C, D) set, all match with A, but C and D don't match with B and that the
# (A, B) match is the highest (thus resulting in an (A, B) group), still match C and D # (A, B) match is the highest (thus resulting in an (A, B) group), still match C and D
@@ -830,4 +830,4 @@ class TestCaseget_groups:
assert B in g1 assert B in g1
assert C in g2 assert C in g2
assert D in g2 assert D in g2


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2009-10-23
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2006/05/02
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2006/02/23
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2011/09/07
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2013-07-28
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,6 +1,6 @@
# Created By: Virgil Dupras
# Created On: 2006/02/23
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at


@@ -1,12 +1,12 @@
# Created By: Virgil Dupras
# Created On: 2006/03/03
-# Copyright 2013 Hardcoded Software (http://www.hardcoded.net)
+# Copyright 2014 Hardcoded Software (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package. The terms are also available at
# http://www.hardcoded.net/licenses/bsd_license
-from jobprogress import job
+from hscommon.jobprogress import job
from hscommon.path import Path
from hscommon.testutil import eq_
@@ -25,10 +25,10 @@ class NamedObject:
        self.size = size
        self.path = path
        self.words = getwords(name)

    def __repr__(self):
        return '<NamedObject %r %r>' % (self.name, self.path)

no = NamedObject
@@ -384,7 +384,7 @@ def test_file_evaluates_to_false(fake_fileexists):
    class FalseNamedObject(NamedObject):
        def __bool__(self):
            return False

    s = Scanner()
    f1 = FalseNamedObject('foobar', path='p1')
@@ -445,7 +445,7 @@ def test_tie_breaker_same_name_plus_digit(fake_fileexists):
    assert group.ref is o5

def test_partial_group_match(fake_fileexists):
    # Count the number of discarded matches (when a file doesn't match all other dupes of the
    # group) in Scanner.discarded_file_count
    s = Scanner()
    o1, o2, o3 = no('a b'), no('a'), no('b')
@@ -476,7 +476,7 @@ def test_dont_group_files_that_dont_exist(tmpdir):
        file2.path.remove()
        return [Match(file1, file2, 100)]
    s._getmatches = getmatches
    assert not s.get_dupe_groups([file1, file2])

def test_folder_scan_exclude_subfolder_matches(fake_fileexists):

Some files were not shown because too many files have changed in this diff.